A logarithmic number system (LNS) is an arithmetic system used for representing real numbers in computer and digital hardware, especially for digital signal processing.
A number, $X$, is represented in an LNS by two components: the logarithm ($x$) of its absolute value (as a binary word, usually in two's complement) and its sign bit ($s$):

$$X \mapsto \{s, x\}, \qquad x = \log_b|X|.$$
An LNS can be considered as a floating-point number with the significand always equal to 1 and a non-integer exponent. This formulation simplifies the operations of multiplication, division, powers, and roots, since they reduce to addition, subtraction, multiplication, and division, respectively.
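Concretely, with $x = \log_b|X|$ and $y = \log_b|Y|$, these operations follow directly from the usual logarithm identities:

$$\log_b|XY| = x + y, \qquad \log_b|X/Y| = x - y, \qquad \log_b|X^p| = p\,x, \qquad \log_b\sqrt[p]{|X|} = x/p.$$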
On the other hand, the operations of addition and subtraction are more complicated and they are calculated by the formulae

$$\log_b|X + Y| = x + s_b(y - x), \qquad \log_b|X - Y| = x + d_b(y - x),$$

where the "sum" function is defined by $s_b(z) = \log_b(1 + b^z)$, and the "difference" function by $d_b(z) = \log_b|1 - b^z|$. These functions $s_b(z)$ and $d_b(z)$ are also known as Gaussian logarithms.
The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction. This added cost of evaluation may not be critical when using an LNS primarily for increasing the precision of floating-point math operations.
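A minimal sketch of these rules in Python, assuming natural logarithms (base $b = e$) and ignoring the sign bits and zero handling; the function names are illustrative rather than taken from any library:

```python
import math

def lns_mul(x, y):
    """Multiplication: log|X*Y| = x + y."""
    return x + y

def lns_div(x, y):
    """Division: log|X/Y| = x - y."""
    return x - y

def s_b(z):
    """Gaussian 'sum' function s_b(z) = log(1 + e^z)."""
    return math.log1p(math.exp(z))

def d_b(z):
    """Gaussian 'difference' function d_b(z) = log|1 - e^z|."""
    return math.log(abs(1.0 - math.exp(z)))

def lns_add(x, y):
    """Addition of same-sign operands: log|X+Y| = x + s_b(y - x)."""
    return x + s_b(y - x)

def lns_sub(x, y):
    """Subtraction (X != Y): log|X-Y| = x + d_b(y - x)."""
    return x + d_b(y - x)

# Example: X = 3 and Y = 4 enter the log domain once, then all
# arithmetic happens on their logarithms.
x, y = math.log(3.0), math.log(4.0)
assert math.isclose(math.exp(lns_mul(x, y)), 12.0)  # 3 * 4
assert math.isclose(math.exp(lns_add(x, y)), 7.0)   # 3 + 4
assert math.isclose(math.exp(lns_sub(y, x)), 1.0)   # 4 - 3
```

In a hardware LNS, $s_b$ and $d_b$ are not evaluated with an exponential as above but read from lookup tables, possibly combined with interpolation or cotransformation.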
Logarithmic number systems have been independently invented and published at least three times as an alternative to fixed-point and floating-point number systems. [1]
Nicholas Kingsbury and Peter Rayner introduced "logarithmic arithmetic" for digital signal processing (DSP) in 1971. [2]
A similar LNS named "signed logarithmic number system" (SLNS) was described in 1975 by Earl Swartzlander and Aristides Alexopoulos; rather than use two's complement notation for the logarithms, they offset them (scale the numbers being represented) to avoid negative logs. [3]
Samuel Lee and Albert Edgar described a similar system, which they called the "Focus" number system, in 1977. [4] [1] [5] [6]
The mathematical foundations for addition and subtraction in an LNS trace back to Zecchini Leonelli and Carl Friedrich Gauss in the early 1800s. [7] [8] [9] [10] [11]
In the late 1800s, the Spanish engineer Leonardo Torres Quevedo conceived a series of analogue calculating mechanical machines [12] [13] and developed one that could solve algebraic equations with eight terms, finding the roots, including the complex ones. One part of this machine, called an "endless spindle", allowed the mechanical expression of the relation $y = \log(1 + 10^x)$, [14] with the aim of extracting the logarithm of a sum as a sum of logarithms.
An LNS has been used in the Gravity Pipe (GRAPE-5) special-purpose supercomputer [15] that won the Gordon Bell Prize in 1999.
A substantial effort to explore the applicability of LNSs as a viable alternative to floating point for general-purpose processing of single-precision real numbers is described in the context of the European Logarithmic Microprocessor (ELM). [16] [17] A fabricated prototype of the processor, which has a 32-bit cotransformation-based LNS arithmetic logic unit (ALU), demonstrated LNSs as a "more accurate alternative to floating-point", with improved speed. Further improvement of the LNS design based on the ELM architecture has shown that it can offer significantly higher speed and accuracy than floating point as well. [18]
LNSs are sometimes used in FPGA-based applications where most arithmetic operations are multiplication or division. [19]
In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision:

$$12.345 = 12345 \times 10^{-3}.$$
In mathematics, the logarithm is the inverse function to exponentiation. That means that the logarithm of a number $x$ to the base $b$ is the exponent to which $b$ must be raised to produce $x$. For example, since $1000 = 10^3$, the logarithm base 10 of 1000 is 3, or $\log_{10}(1000) = 3$. The logarithm of $x$ to base $b$ is denoted as $\log_b(x)$, or without parentheses, $\log_b x$, or even without the explicit base, $\log x$, when no confusion is possible, or when the base does not matter, such as in big O notation.
The natural logarithm of a number is its logarithm to the base of the mathematical constant $e$, which is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of $x$ is generally written as $\ln x$, $\log_e x$, or sometimes, if the base $e$ is implicit, simply $\log x$. Parentheses are sometimes added for clarity, giving $\ln(x)$, $\log_e(x)$, or $\log(x)$. This is done particularly when the argument to the logarithm is not a single symbol, so as to prevent ambiguity.
A multiplication algorithm is an algorithm to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal numeral system.
Rounding means replacing a number with an approximate value that has a shorter, simpler, or more explicit representation. For example, replacing $23.4476 with $23.45, the fraction 312/937 with 1/3, or the expression √2 with 1.414.
In mathematics, the binary logarithm ($\log_2 n$) is the power to which the number 2 must be raised to obtain the value $n$. That is, for any real number $x$,

$$x = \log_2 n \iff 2^x = n.$$
In mathematics, an identity is an equality relating one mathematical expression $A$ to another mathematical expression $B$, such that $A$ and $B$ produce the same value for all values of the variables within a certain range of validity. In other words, $A = B$ is an identity if $A$ and $B$ define the same functions, and an identity is an equality between functions that are differently defined. For example, $(a + b)^2 = a^2 + 2ab + b^2$ and $\cos^2\theta + \sin^2\theta = 1$ are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign. Formally, an identity is a universally quantified equality.
The IEEE Standard for Floating-Point Arithmetic is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard.
The significand is part of a number in scientific notation or in floating-point representation, consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction.
CORDIC, also known as Volder's algorithm or the digit-by-digit method (with circular, linear, hyperbolic, and generalized hyperbolic variants), is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and closely related methods, known as pseudo-multiplication and pseudo-division or factor combining, are commonly used when no hardware multiplier is available, as the only operations they require are addition, subtraction, bit shifts, and table lookups. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier for cost or space reasons.
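A minimal sketch of circular CORDIC in Python, computing sine and cosine by rotating a vector through a fixed table of angles $\arctan 2^{-i}$; in hardware, the multiplications by $2^{-i}$ would be bit shifts (this is an illustration of the principle, not any particular published implementation):

```python
import math

N = 32                                        # iterations, roughly one bit each
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # lookup table of arctan(2^-i)
K = 1.0
for i in range(N):                            # cumulative scale factor ~0.60725
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Compute (sin, cos) of theta (|theta| <= pi/2) by shift-and-add rotations."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0           # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x                               # y ~ sin(theta), x ~ cos(theta)

print(cordic_sin_cos(math.pi / 6))            # ~ (0.5, 0.8660...)
```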
A residue numeral system (RNS) is a numeral system representing integers by their values modulo several pairwise coprime integers called the moduli. This representation is allowed by the Chinese remainder theorem, which asserts that, if M is the product of the moduli, there is, in an interval of length M, exactly one integer having any given set of modular values. The arithmetic of a residue numeral system is also called multi-modular arithmetic.
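A toy example in Python with the moduli $(3, 5, 7)$, so $M = 105$: every integer below 105 is determined by its three residues, and addition and multiplication act on each component independently (the function names are illustrative):

```python
MODULI = (3, 5, 7)                 # pairwise coprime, M = 3 * 5 * 7 = 105

def to_rns(n):
    """Represent n by its residues modulo each modulus."""
    return tuple(n % m for m in MODULI)

def rns_add(a, b):
    """Carry-free, component-wise addition."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Component-wise multiplication, with no interaction between digits."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

# 17 + 19 = 36 and 17 * 5 = 85, both below 105, so the residues identify them uniquely.
assert rns_add(to_rns(17), to_rns(19)) == to_rns(36)
assert rns_mul(to_rns(17), to_rns(5)) == to_rns(85)
```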
In computer science, the iterated logarithm of $n$, written $\log^* n$, is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \le 1, \\ 1 + \log^*(\log n) & \text{if } n > 1. \end{cases}$$
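A direct Python transcription of that recurrence, with the base-2 logarithm chosen for illustration:

```python
import math

def log_star(n, base=2):
    """Iterated logarithm: count how often log must be applied until n <= 1."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print(log_star(65536))   # 4, since 65536 -> 16 -> 4 -> 2 -> 1
```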
The BKM algorithm is a shift-and-add algorithm for computing elementary functions, first published in 1994 by Jean-Claude Bajard, Sylvanus Kla, and Jean-Michel Muller. BKM is based on computing complex logarithms (L-mode) and exponentials (E-mode) using a method similar to the algorithm Henry Briggs used to compute logarithms. By using a precomputed table of logarithms of negative powers of two, the BKM algorithm computes elementary functions using only integer add, shift, and compare operations.
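A rough Python sketch of the L-mode idea for the natural logarithm: the table holds $\ln(1 + 2^{-k})$, the loop decides with a comparison whether to keep each factor, and a fixed-point hardware version would replace the multiplication by $2^{-k}$ with a right shift. The method converges for inputs in roughly $[1, 4.77)$, the product of all table factors; this illustrates the principle rather than the published algorithm's exact control logic:

```python
import math

N = 53                                                  # iterations ~ bits of precision
LOG_TABLE = [math.log1p(2.0 ** -k) for k in range(N)]   # ln(1 + 2^-k)

def bkm_log(x):
    """Approximate ln(x) for 1 <= x < ~4.77 by factor combining."""
    v, result = 1.0, 0.0
    for k in range(N):
        t = v + v * 2.0 ** -k          # candidate v * (1 + 2^-k); a shift-and-add
        if t <= x:                     # keep the factor only if it does not overshoot
            v = t
            result += LOG_TABLE[k]     # ln accumulates as the product approaches x
    return result

print(bkm_log(2.0), math.log(2.0))     # the two values agree closely
```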
The level-index (LI) representation of numbers, and its algorithms for arithmetic operations, were introduced by Charles Clenshaw and Frank Olver in 1984.
In mathematics, the super-logarithm is one of the two inverse functions of tetration. Just as exponentiation has two inverse functions, roots and logarithms, tetration has two inverse functions, super-roots and super-logarithms; there are several ways of interpreting super-logarithms.
In numerical analysis, catastrophic cancellation is the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers.
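A small Python illustration: each operand, once rounded to an IEEE 754 double, is an excellent approximation of its intended value, yet the subtraction wipes out nearly all of that accuracy (the numbers are arbitrary, chosen only to force the cancellation):

```python
x = 1.0000000001   # rounds to a double with relative error ~1e-16
y = 1.0            # exactly representable

diff = x - y       # the shared leading digits cancel
exact = 1e-10      # the intended difference

print(diff)                          # 1.000000082740371e-10
print(abs(diff - exact) / exact)     # relative error ~8e-8, far worse than the inputs'
```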
Fast inverse square root, sometimes referred to as Fast InvSqrt or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates $1/\sqrt{x}$, the reciprocal of the square root of a 32-bit floating-point number $x$ in IEEE 754 floating-point format. The algorithm is best known for its implementation in 1999 in Quake III Arena, a first-person shooter video game heavily based on 3D graphics. With subsequent hardware advancements, especially the x86 SSE instruction rsqrtss, this algorithm is not generally the best choice for modern computers, though it remains an interesting historical example.
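The core trick is to reinterpret the float's bit pattern as an integer, which behaves like a scaled and shifted logarithm of $x$. A minimal Python transcription, using struct where the original C code used pointer casts:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) for a positive 32-bit float, Quake III style."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # reinterpret float bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # magic constant minus half the 'log'
    y = struct.unpack('<f', struct.pack('<I', i))[0]   # reinterpret back to float
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton-Raphson refinement step

print(fast_inv_sqrt(4.0))    # ~0.4991, close to the exact value 0.5
```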
In mathematics, addition and subtraction logarithms or Gaussian logarithms can be utilized to find the logarithms of the sum and difference of a pair of values whose logarithms are known, without knowing the values themselves.
In science and engineering, a power level and a field level are logarithmic magnitudes of certain quantities referenced to a standard reference value of the same type.
In computing, tapered floating point (TFP) is a format similar to floating point, but with variable-sized entries for the significand and exponent instead of the fixed-length entries found in normal floating-point formats. In addition to this, tapered floating-point formats provide a fixed-size pointer entry indicating the number of digits in the exponent entry. The number of digits of the significand entry results from the difference of the fixed total length minus the length of the exponent and pointer entries.