Lookup table

In computer science, a lookup table is an array that replaces runtime computation with a simpler array indexing operation. The savings in terms of processing time can be significant, since retrieving a value from memory is often faster than undergoing an "expensive" computation or input/output operation.[1] The tables may be precalculated and stored in static program storage, calculated (or "pre-fetched") as part of a program's initialization phase (memoization), or even stored in hardware in application-specific platforms. Lookup tables are also used extensively to validate input values by matching against a list of valid (or invalid) items in an array and, in some programming languages, may include pointer functions (or offsets to labels) to process the matching input. FPGAs also make extensive use of reconfigurable, hardware-implemented lookup tables to provide programmable hardware functionality.

History

Part of a 20th-century table of common logarithms in the reference book Abramowitz and Stegun.

Before the advent of computers, lookup tables of values were used to speed up hand calculations of complex functions, such as in trigonometry, logarithms, and statistical density functions. [2]

In ancient India (499 AD), Aryabhata created one of the first sine tables, which he encoded in a Sanskrit-letter-based number system. In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times the row number, where the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144".[3] Modern school children are often taught to memorize "times tables" to avoid calculation of the most commonly used products (up to 9 × 9 or 12 × 12).

Early in the history of computers, input/output operations were particularly slow, even in comparison to processor speeds of the time. It made sense to reduce expensive read operations by a form of manual caching: creating either static lookup tables (embedded in the program) or dynamic prefetched arrays that contain only the most commonly occurring data items. Despite the introduction of systemwide caching that now automates this process, application-level lookup tables can still improve performance for data items that rarely, if ever, change.

Lookup tables were one of the earliest functionalities implemented in computer spreadsheets, with the initial version of VisiCalc (1979) including a LOOKUP function among its original 20 functions. [4] This has been followed by subsequent spreadsheets, such as Microsoft Excel, and complemented by specialized VLOOKUP and HLOOKUP functions to simplify lookup in a vertical or horizontal table.

Examples

Simple lookup in an array, an associative array or a linked list (unsorted list)

This is known as a linear search or brute-force search: each element is checked for equality in turn, and the associated value, if any, is used as the result of the search. This is often the slowest search method unless frequently requested values occur early in the list. For a one-dimensional array or linked list, the lookup usually determines whether or not there is a match with an 'input' data value.

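As a minimal sketch (the key/value pairs here are illustrative assumptions), a linear-search lookup in C might look like the following, scanning the array until a matching key is found:

#include <stddef.h>

/* Hypothetical key/value table, deliberately unsorted. */
struct entry { int key; const char *value; };

static const struct entry table[] = {
    { 10, "ten" }, { 42, "forty-two" }, { 7, "seven" }
};

/* Check each element for equality in turn; return the associated
   value on a match, or NULL when no element matches. */
const char *lookup(int key) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].key == key)
            return table[i].value;
    return NULL;
}
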
Binary search in an array or an associative array (sorted list)

An example of a divide-and-conquer algorithm, binary search determines which half of the table a match may be found in and repeats on that half until it either succeeds or fails. This is only possible if the list is sorted, but it gives good performance even if the list is lengthy.

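A sketch of such a lookup in C, assuming a sorted array of integers (the array contents and calling convention are illustrative, not a fixed API):

/* Binary search over a sorted array of n integers; returns the index
   of 'key', or -1 on failure. Each comparison halves the range that
   can still contain a match. */
int binary_lookup(const int *sorted, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (sorted[mid] == key) return mid;
        if (sorted[mid] < key)  lo = mid + 1;   /* match, if any, is in upper half */
        else                    hi = mid - 1;   /* match, if any, is in lower half */
    }
    return -1;   /* failure: key is not present */
}
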
Trivial hash function

For a trivial hash function lookup, the unsigned raw data value is used directly as an index into a one-dimensional table to extract a result. For small ranges, this can be among the fastest lookups, exceeding even binary-search speed: it requires zero branches and executes in constant time.

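As a minimal sketch, the table below maps a character code directly to its uppercase form; the byte value itself is the index, so the lookup is a single array access (the table and its use are illustrative assumptions, not a standard-library facility):

/* 256-entry table indexed directly by an unsigned byte value. */
static unsigned char to_upper[256];

void init_to_upper(void) {
    for (int c = 0; c < 256; c++)
        to_upper[c] = (c >= 'a' && c <= 'z') ? c - 'a' + 'A' : c;
}

/* The lookup itself: one index operation, no branches. */
unsigned char upper(unsigned char c) {
    return to_upper[c];
}
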
Counting bits in a series of bytes

One discrete problem that is expensive to solve on many computers is that of counting the number of bits which are set to 1 in a (binary) number, sometimes called the population function. For example, the decimal number "37" is "00100101" in binary, so it contains three bits that are set to binary "1".

A simple example of C code, designed to count the 1 bits in an int, might look like this:

int count_ones(unsigned int x) {
    int result = 0;
    while (x != 0)
        result++, x = x & (x - 1);   /* clears the lowest set bit each pass */
    return result;
}

This apparently simple algorithm can take potentially hundreds of cycles even on a modern architecture, because it makes many branches in the loop, and branching is slow. This can be ameliorated using loop unrolling and some other compiler optimizations. There is, however, a simple and much faster algorithmic solution: using a trivial hash function table lookup.

Simply construct a static table, bits_set, with 256 entries giving the number of one bits set in each possible byte value (e.g. 0x00 = 0, 0x01 = 1, 0x02 = 1, and so on). Then use this table to find the number of ones in each byte of the integer using a trivial hash function lookup on each byte in turn, and sum them. This requires no branches, and just four indexed memory accesses, considerably faster than the earlier code.

/* Pseudocode of the lookup table 'uint32_t bits_set[256]' */
/*                    0b00, 0b01, 0b10, 0b11, 0b100, 0b101, ... */
int bits_set[256] = {    0,    1,    1,    2,     1,     2,
                      /* 200+ more entries */ };

/* (this code assumes that 'int' is an unsigned 32-bit wide integer) */
int count_ones(unsigned int x) {
    return bits_set[ x        & 255]
         + bits_set[(x >>  8) & 255]
         + bits_set[(x >> 16) & 255]
         + bits_set[(x >> 24) & 255];
}

The above source can easily be improved (avoiding the AND and shift operations) by 'recasting' x as a 4-byte unsigned char array and, preferably, coding it inline as a single statement rather than as a function. Note that even this simple algorithm can be too slow now, because the original code might run faster from the cache of modern processors, and (large) lookup tables do not fit well in caches and can cause slower access to memory (in addition, the above example requires computing addresses within a table to perform the four lookups).

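A sketch of that variant, reusing the bits_set table above (it assumes 'unsigned int' is 4 bytes wide; byte order does not matter, since the four per-byte counts are simply summed):

/* Reinterpret x as four bytes instead of shifting and masking. */
int count_ones_recast(unsigned int x) {
    const unsigned char *b = (const unsigned char *)&x;
    return bits_set[b[0]] + bits_set[b[1]] + bits_set[b[2]] + bits_set[b[3]];
}
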
Lookup tables in image processing

Red (A), Green (B), Blue (C) 16-bit lookup table file sample. (Lines 14 to 65524 not shown)

In data analysis applications, such as image processing, a lookup table (LUT) is used to transform the input data into a more desirable output format. For example, a grayscale picture of the planet Saturn will be transformed into a color image to emphasize the differences in its rings.

A classic example of reducing run-time computations using lookup tables is to obtain the result of a trigonometry calculation, such as the sine of a value. Calculating trigonometric functions can substantially slow a computing application. The same application can finish much sooner when it first precalculates the sine of a number of values, for example for each whole number of degrees (the table can be defined as static variables at compile time, reducing repeated run-time costs). When the program requires the sine of a value, it can use the lookup table to retrieve the closest sine value from a memory address, and may also interpolate to the sine of the desired value, instead of calculating by mathematical formula. Lookup tables are thus used by mathematics coprocessors in computer systems. An error in a lookup table was responsible for Intel's infamous floating-point divide bug.

Functions of a single variable (such as sine and cosine) may be implemented by a simple array. Functions involving two or more variables require multidimensional array indexing techniques. The latter case may thus employ a two-dimensional array power[x][y] to replace a function that calculates x^y for a limited range of x and y values. Functions that have more than one result may be implemented with lookup tables that are arrays of structures.

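A sketch of such a two-dimensional table in C, precomputing x^y for small non-negative arguments (the ranges are illustrative assumptions; 16 × 16 keeps every value within an unsigned 64-bit integer):

/* power[x][y] caches x raised to the y-th power for 0 <= x, y < 16. */
static unsigned long long power[16][16];

void init_power(void) {
    for (int x = 0; x < 16; x++) {
        unsigned long long p = 1;   /* x^0 */
        for (int y = 0; y < 16; y++) {
            power[x][y] = p;        /* p holds x^y at this point */
            p *= x;
        }
    }
}
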
As mentioned, there are intermediate solutions that use tables in combination with a small amount of computation, often using interpolation. Pre-calculation combined with interpolation can produce higher accuracy for values that fall between two precomputed values. This technique requires slightly more time but can greatly enhance accuracy in applications that require it. Depending on the values being precomputed, pre-computation with interpolation can also be used to shrink the lookup table size while maintaining accuracy.

In image processing, lookup tables are often called LUTs (or 3DLUTs), and give an output value for each of a range of index values. One common LUT, called the colormap or palette, is used to determine the colors and intensity values with which a particular image will be displayed. In computed tomography, "windowing" refers to a related concept for determining how to display the intensity of measured radiation.

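A minimal sketch of a palette lookup in C (the 256-entry palette and the 8-bit pixel format are illustrative assumptions):

/* Each 8-bit pixel value indexes a palette entry holding the RGB
   triple with which that pixel is displayed. */
struct rgb { unsigned char r, g, b; };

static struct rgb palette[256];   /* filled in by the application */

struct rgb display_color(unsigned char pixel) {
    return palette[pixel];        /* one index operation per pixel */
}
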
While often effective, employing a lookup table may nevertheless result in a severe penalty if the computation that the LUT replaces is relatively simple. Memory retrieval time and the complexity of memory requirements can increase application operation time and system complexity relative to what would be required by straight formula computation. The possibility of polluting the cache may also become a problem. Table accesses for large tables will almost certainly cause a cache miss. This phenomenon is increasingly becoming an issue as processors outpace memory. A similar issue appears in rematerialization, a compiler optimization. In some environments, such as the Java programming language, table lookups can be even more expensive due to mandatory bounds-checking involving an additional comparison and branch for each lookup.

There are two fundamental limitations on when it is possible to construct a lookup table for a required operation. One is the amount of memory that is available: one cannot construct a lookup table larger than the space available for the table, although it is possible to construct disk-based lookup tables at the expense of lookup time. The other is the time required to compute the table values in the first instance; although this usually needs to be done only once, if it takes a prohibitively long time, it may make the use of a lookup table an inappropriate solution. As previously stated however, tables can be statically defined in many cases.

Computing sines

Most computers only perform basic arithmetic operations and cannot directly calculate the sine of a given value. Instead, they use the CORDIC algorithm or a complex formula such as the following Taylor series to compute the value of sine to a high degree of precision:

sin(x) ≈ x − x³/3! + x⁵/5! − x⁷/7!   (for x close to 0)

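A sketch of this series in C, truncated after the x⁷ term as in the formula above (accurate only for x near 0; a production implementation would first reduce the argument's range):

/* Truncated Taylor series for sine: x - x^3/3! + x^5/5! - x^7/7!,
   evaluated in nested (Horner) form to reduce the multiplication count. */
double taylor_sine(double x) {
    double x2 = x * x;
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)));
}
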
However, this can be expensive to compute, especially on slow processors, and there are many applications, particularly in traditional computer graphics, that need to compute many thousands of sine values every second. A common solution is to initially compute the sine of many evenly distributed values, and then, to find the sine of x, choose the sine of the value closest to x. This will be close to the correct value because sine is a continuous function with a bounded rate of change. For example:

real array sine_table[-1000..1000]
for x from -1000 to 1000
    sine_table[x] := sine(pi * x / 1000)

function lookup_sine(x)
    return sine_table[round(1000 * x / pi)]

Linear interpolation on a portion of the sine function

Unfortunately, the table requires quite a bit of space: if IEEE double-precision floating-point numbers are used, over 16,000 bytes would be required. We can use fewer samples, but then our precision will significantly worsen. One good solution is linear interpolation, which draws a line between the two points in the table on either side of the value and locates the answer on that line. This is still quick to compute, and much more accurate for smooth functions such as the sine function. Here is an example using linear interpolation:

function lookup_sine(x)
    x1 := floor(x*1000/pi)
    y1 := sine_table[x1]
    y2 := sine_table[x1+1]
    return y1 + (y2-y1)*(x*1000/pi-x1)

Linear interpolation produces an interpolated function that is continuous, but that will not, in general, have a continuous derivative. For smoother table-lookup interpolation that is continuous and has a continuous first derivative, one should use the cubic Hermite spline.

Another solution that uses a quarter of the space but takes a bit longer to compute would be to take into account the relationships between sine and cosine along with their symmetry rules. In this case, the lookup table is calculated by using the sine function for the first quadrant only (i.e. sin(0..pi/2)). When we need a value, we first wrap the angle to the full circle (not needed if values are always between 0 and 2*pi) and then reduce it to the first quadrant to return the correct value (i.e. the first quadrant is a straight return, the second quadrant is read from pi/2-x, and the third and fourth are the negatives of the first and second respectively). For cosine, we only have to return the angle shifted by pi/2 (i.e. x+pi/2). For tangent, we divide the sine by the cosine (divide-by-zero handling may be needed depending on implementation):

function init_sine()
    for x from 0 to (360/4)+1
        sine_table[x] := sine(2*pi * x / 360)

function lookup_sine(x)
    x := wrap x from 0 to 360
    y := mod(x, 90)
    if (x <  90) return  sine_table[   y]
    if (x < 180) return  sine_table[90-y]
    if (x < 270) return -sine_table[   y]
                 return -sine_table[90-y]

function lookup_cosine(x)
    return lookup_sine(x + 90)

function lookup_tan(x)
    return (lookup_sine(x) / lookup_cosine(x))

When using interpolation, the size of the lookup table can be reduced by using nonuniform sampling, which means that where the function is close to straight, we use few sample points, while where it changes value quickly, we use more sample points to keep the approximation close to the real curve. For more information, see interpolation.

Other usages of lookup tables

Caches

Storage caches (including disk caches for files, or processor caches for either code or data) also work like a lookup table. The table is built with very fast memory instead of being stored on slower external memory, and maintains two pieces of data for a sub-range of bits composing an external memory (or disk) address (notably the lowest bits of any possible external address):

  - the tag, which holds the remaining (highest) bits of the address; if these bits match those of the address being read or written, the entry caches that address;
  - the data associated with that address.

A single (fast) lookup is performed to read the tag in the lookup table at the index specified by the lowest bits of the desired external storage address, and to determine if the memory address is hit by the cache. When a hit is found, no access to external memory is needed (except for write operations, where the cached value may need to be updated asynchronously to the slower memory after some time, or if the position in the cache must be replaced to cache another address).

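A minimal sketch of that tag check for a hypothetical direct-mapped cache in C (the line size, line count, and field widths are illustrative assumptions):

#include <stddef.h>

/* Hypothetical direct-mapped cache: 256 lines of 64 bytes. The lowest
   address bits select a line; the tag stores the remaining high bits. */
struct cache_line { unsigned tag; int valid; unsigned char data[64]; };

static struct cache_line cache[256];

/* Returns the cached line on a hit, or NULL on a miss. */
struct cache_line *cache_lookup(unsigned addr) {
    unsigned line_no = addr >> 6;                     /* drop 6 offset bits (64-byte lines) */
    struct cache_line *line = &cache[line_no & 255];  /* low bits index the table */
    if (line->valid && line->tag == (line_no >> 8))   /* remaining bits must match the tag */
        return line;
    return NULL;                                      /* miss: read from slower memory */
}
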
Hardware LUTs

In digital logic, a lookup table can be implemented with a multiplexer whose select lines are driven by the address signal and whose inputs are the values of the elements contained in the array. These values can either be hard-wired, as in an ASIC whose purpose is specific to a particular function, or provided by D latches, which allow for configurable values.

An n-bit LUT can encode any n-input Boolean function by storing the truth table of the function in the LUT. This is an efficient way of encoding Boolean logic functions, and LUTs with 4–6 bits of input are in fact the key component of modern field-programmable gate arrays (FPGAs), which provide reconfigurable hardware logic capabilities.

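A sketch of this encoding in C: a 2-input function is stored as a 4-bit truth table, and evaluation is a 1-bit lookup at the index formed by the inputs (XOR is used here purely as an example):

/* Truth table of XOR packed into 4 bits: bit ((b << 1) | a) holds
   f(a, b). Inputs (b,a) = 00, 01, 10, 11 give 0, 1, 1, 0 -> 0b0110. */
static const unsigned xor_lut = 0x6;

/* Evaluate the stored function: the input bits form the bit index. */
unsigned lut_eval(unsigned lut, unsigned a, unsigned b) {
    return (lut >> ((b << 1) | a)) & 1u;
}

An FPGA implements the same idea in hardware, with the truth-table bits held in reconfigurable storage rather than in fixed constants.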

References

  1. McNamee, Paul. "Automated Memoization in C++". pmcnamee.net.
  2. Campbell-Kelly, Martin; Croarken, Mary; Robson, Eleanor, eds. (2003). The History of Mathematical Tables: From Sumer to Spreadsheets (1st ed.). New York. ISBN 978-0-19-850841-0.
  3. Maher, David W. J., and John F. Makowski. "Literary Evidence for Roman Arithmetic with Fractions". Classical Philology, Vol. 96, No. 4 (2001), pp. 376–399. (See p. 383.)
  4. Jelen, Bill. "From 1979 – VisiCalc and LOOKUP!". MrExcel East, March 31, 2012.