
In computing, a **fixed-point number** representation is a real data type for a number that has a fixed number of digits after (and sometimes also before) the radix point (after the decimal point '.' in English decimal notation). Fixed-point number representation can be compared to the more complicated (and more computationally demanding) floating-point number representation.

- Representation
- Operations
- Binary vs. decimal
- A worked example
- Notation
- Precision loss and overflow
- Computer language implementations
- Software application examples
- See also
- References
- Further reading
- External links

Fixed-point numbers are useful for representing fractional values, usually in base 2 or base 10, when the executing processor has no floating-point unit (FPU), as is the case for older or low-cost embedded microprocessors and microcontrollers; when fixed-point provides improved performance or accuracy for the application at hand; or when fixed-point is more natural for the problem (such as the representation of angles).

Displayed value | Integer handled internally
---|---
0.00 | 0
0.01 | 1
0.02 | 2
... | ...
0.99 | 99
1.00 | 100

A value of a fixed-point data type is essentially an integer that is scaled by an implicit specific factor determined by the type. For example, the value 1.23 can be represented as 1230 in a fixed-point data type with scaling factor of 1/1000, and the value 1,230,000 can be represented as 1230 with a scaling factor of 1000. Unlike floating-point data types, the scaling factor is the same for all values of the same type, and does not change during the entire computation.

The scaling factor is usually a power of 10 (for human convenience) or a power of 2 (for computational efficiency). However, other scaling factors may be used occasionally, e.g. a time value in hours may be represented as a fixed-point type with a scale factor of 1/3600 to obtain values with one-second accuracy.
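The scaled-integer idea can be sketched in a few lines of Python (the names here are illustrative, not from any particular library):

```python
# A fixed-point value is just an integer with an implicit scaling factor.
# Example: a scaling factor of 1/100 gives two decimal places.
SCALE = 100  # the scaling factor is 1/SCALE

def to_fixed(x):
    """Encode a real value as a scaled integer (rounding to nearest)."""
    return round(x * SCALE)

def to_real(i):
    """Decode the scaled integer back to a real value."""
    return i / SCALE

price = to_fixed(1.23)   # stored as the integer 123
print(price)             # 123
print(to_real(price))    # 1.23

# An unusual scaling factor: hours stored with one-second resolution.
HOUR_SCALE = 3600            # the scaling factor is 1/3600
t = round(1.5 * HOUR_SCALE)  # 1.5 hours -> 5400 "ticks"
print(t)                     # 5400
```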

The maximum value of a fixed-point type is simply the largest value that can be represented in the underlying integer type multiplied by the scaling factor; and similarly for the minimum value.

To convert a number from a fixed point type with scaling factor *R* to another type with scaling factor *S*, the underlying integer must be multiplied by *R* and divided by *S*; that is, multiplied by the ratio *R*/*S*. Thus, for example, to convert the value 1.23 = 123/100 from a type with scaling factor *R*=1/100 to one with scaling factor *S*=1/1000, the underlying integer 123 must be multiplied by (1/100)/(1/1000) = 10, yielding the representation 1230/1000. If *S* does not divide *R* (in particular, if the new scaling factor *S* is greater than the original *R*), the new integer will have to be rounded. The rounding rules and methods are usually part of the language's specification.
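The conversion rule can be sketched with exact rational arithmetic, using Python's standard `fractions` module (the helper name `convert` is ours):

```python
from fractions import Fraction

def convert(i, r, s):
    """Convert underlying integer i from scaling factor r to scaling
    factor s: multiply by the ratio r/s and round if not exact."""
    return round(i * Fraction(r) / Fraction(s))

# 1.23 stored as 123 at scale 1/100, converted to scale 1/1000:
print(convert(123, Fraction(1, 100), Fraction(1, 1000)))   # 1230

# Converting 1.234 (1234 at scale 1/1000) down to scale 1/100 must round:
print(convert(1234, Fraction(1, 1000), Fraction(1, 100)))  # 123
```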

To add or subtract two values of the same fixed-point type, it is sufficient to add or subtract the underlying integers, and keep their common scaling factor. The result can be exactly represented in the same type, as long as no overflow occurs (i.e. provided that the sum of the two integers fits in the underlying integer type). If the numbers have different fixed-point types, with different scaling factors, then one of them must be converted to the other before the sum.
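Both cases can be illustrated briefly (Python sketch, scale 1/100; the variable names are ours):

```python
# Same scaling factor 1/100: add the underlying integers directly.
a = 123               # represents 1.23
b = 450               # represents 4.50
total = a + b
print(total)          # 573, i.e. 5.73

# Different scaling factors: rescale one operand first.
c = 5                 # represents 0.5 at scale 1/10
c_rescaled = c * 10   # now 50 at scale 1/100
print(a + c_rescaled) # 173, i.e. 1.73
```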

To multiply two fixed-point numbers, it suffices to multiply the two underlying integers, and assume that the scaling factor of the result is the product of their scaling factors. This operation involves no rounding. For example, multiplying the numbers 123 scaled by 1/1000 (0.123) and 25 scaled by 1/10 (2.5) yields the integer 123×25 = 3075 scaled by (1/1000)×(1/10) = 1/10000, that is 3075/10000 = 0.3075. If the two operands belong to the same fixed-point type, and the result is also to be represented in that type, then the product of the two integers must be explicitly multiplied by the common scaling factor; in this case the result may have to be rounded, and overflow may occur. For example, if the common scaling factor is 1/100, multiplying 1.23 by 0.25 entails multiplying 123 by 25 to yield 3075 with an intermediate scaling factor of 1/10000. This then must be multiplied by 1/100 to yield either 31 (0.31) or 30 (0.30), depending on the rounding method used, to result in a final scale factor of 1/100.
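The two situations above can be sketched as follows (Python; the helper name is ours, and the rounding shown is a simple round-half-up for nonnegative operands):

```python
def mul_same_format(a, b, scale):
    """Multiply two nonnegative integers sharing scaling factor 1/scale
    and return the product in the same format, rounding half up."""
    wide = a * b                         # intermediate scale is 1/scale**2
    return (wide + scale // 2) // scale  # rescale back to 1/scale

# Mixed formats, no rounding: 0.123 (scale 1/1000) times 2.5 (scale 1/10).
print(123 * 25)                        # 3075, i.e. 0.3075 at scale 1/10000

# Same format in and out: 1.23 * 0.25 at scale 1/100.
print(mul_same_format(123, 25, 100))   # 31, i.e. 0.31
```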

To divide two fixed-point numbers, one takes the integer quotient of their underlying integers, and assumes that the scaling factor is the quotient of their scaling factors. In general, the first division involves rounding. For example, dividing 3456 scaled by 1/100 (34.56) by 1234 scaled by 1/1000 (1.234) yields the integer 3456÷1234 = 3 (rounded) with scale factor (1/100)/(1/1000) = 10, that is, 30. One can obtain a more accurate result by first converting the dividend to a more precise type: in the same example, converting 3456 scaled by 1/100 (34.56) to 3,456,000 scaled by 1/100000, before dividing by 1234 scaled by 1/1000 (1.234), yields 3456000÷1234 = 2801 (rounded) with scaling factor (1/100000)/(1/1000) = 1/100, that is 28.01 (instead of 30). If both operands and the desired result all have the same scaling factor, then the quotient of the two integers must be explicitly multiplied by that common scaling factor.
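This example can be reproduced with plain integer arithmetic (Python sketch; rounding to nearest is done by adding half the divisor before the floor division):

```python
# 34.56 (3456 at scale 1/100) divided by 1.234 (1234 at scale 1/1000).

# Naive: rounded integer quotient; result scale is (1/100)/(1/1000) = 10.
q = (3456 + 1234 // 2) // 1234
print(q)              # 3, representing 30 -- very coarse

# Better: widen the dividend to scale 1/100000 first, so the result
# lands at scale (1/100000)/(1/1000) = 1/100.
q2 = (3456 * 1000 + 1234 // 2) // 1234
print(q2)             # 2801, representing 28.01
```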

The two most common classes of fixed-point types are decimal and binary. Decimal fixed-point types have a scaling factor that is a power of ten; for binary fixed-point types it is a power of two.

Binary fixed-point types are most commonly used, because the rescaling operations can be implemented as fast bit shifts. Binary fixed-point numbers can represent fractional powers of two exactly, but, like binary floating-point numbers, cannot exactly represent fractional powers of ten. If exact fractional powers of ten are desired, then a decimal format should be used. For example, one-tenth (0.1) and one-hundredth (0.01) can be represented only approximately by binary fixed-point or binary floating-point representations, while they can be represented exactly in decimal fixed-point or decimal floating-point representations. These representations may be encoded in many ways, including binary-coded decimal (BCD).
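The inexactness of one-tenth in binary can be demonstrated directly (Python sketch):

```python
# With 16 fractional bits (scale 1/2**16), 0.1 is only approximated:
i = round(0.1 * 2**16)
print(i)              # 6554
print(i / 2**16)      # 0.100006103515625 -- close, but not exactly 0.1

# A decimal scale of 1/10**4 represents 0.1 exactly as the integer 1000:
print(1000 / 10**4)   # 0.1
```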

Suppose we multiply two fixed-point numbers, each with three decimal places:

(10.500)(1.050) = 1 × 10.500 + 0.050 × 10.500 = 10.500 + 0.525000 = 11.025000

Note that since there are three decimal places, we show the trailing zeros. To re-characterize this as an integer multiplication, we must first multiply each operand by 1000 (10^3), moving all the fractional digits into integer places, and then multiply by 10^-3 for each operand to compensate. The equation now looks like:

(10.500 × 10^3)(1.050 × 10^3)(10^-3)(10^-3) = (10500)(1050)(10^-6) = 11,025,000 × 10^-6 = 11.025000

This works equivalently in other bases, notably base 2 for computing, since a bit shift is the same as a multiplication or division by a power of 2. Three decimal digits are equivalent to about 10 binary digits (10^3 ≈ 2^10), so we should round 0.05 to 10 bits after the binary point. The closest approximation is then 0.0000110011_2.

10 = 8 + 2 = 2^3 + 2^1
1 = 2^0
0.5 = 2^-1
0.05 ≈ 0.0000110011_2

Thus our multiplication becomes

(1010.100 × 2^3)(1.0000110011 × 2^10)(2^-13) = (1010100)(10000110011)(2^-13) = (10110000010111100)(2^-13) = 1011.0000010111100_2

In decimal this is 11.02294921875, which rounds to 11.023 with three digits after the decimal point.
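The binary arithmetic above can be checked mechanically (Python sketch):

```python
a = 0b1010100        # 10.5 as 1010.100_2, i.e. 84 * 2**-3
b = 0b10000110011    # ~1.05 as 1.0000110011_2, i.e. 1075 * 2**-10
prod = a * b         # combined scale is 2**-13
print(prod)          # 90300
print(bin(prod))     # 0b10110000010111100
print(prod / 2**13)  # 11.02294921875, which rounds to 11.023
```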

There are various notations used to represent word length and radix point in a binary fixed-point number. In the following list, *f* represents the number of fractional bits, *m* the number of magnitude or integer bits, *s* the number of sign bits, and *b* the total number of bits.

- **Q***f*: The "Q" prefix. For example, Q15 represents a number with 15 fractional bits. This notation is ambiguous, since it does not specify the word length; however, it is usually assumed that the word length is either 16 or 32 bits depending on the target processor in use.^{ [1] }
- **Q***m***.***f*: The unambiguous form of the "Q" notation. Since the entire word is a two's complement integer, a sign bit is implied. For example, Q1.30 describes a number with 1 integer bit and 30 fractional bits stored as a 32-bit two's complement integer.^{ [1] }^{ [2] }
- **fx***m***.***b*: The "fx" prefix is similar to the above, but uses the word length as the second item in the dotted pair. For example, fx1.16 describes a number with 1 magnitude bit and 15 fractional bits in a 16-bit word.^{ [3] }
- *s***:***m***:***f*: Yet other notations include a sign bit, such as this one used in the PS2 GS User's Guide.^{ [4] } It also differs from conventional usage by using a colon instead of a period as the separator. For example, in this notation, 0:8:0 represents an unsigned 8-bit integer.
- **(p,q)**: Used in the PL/I programming language to specify *p* total digits (not including the sign) with *q* after the radix point. *q* can be positive or negative, and the radix binary or decimal.
- **<s,p,i>**: Used with the LabVIEW programming language for 'FXP' fixed-point numbers, where *s* is either + or ±, signifying an unsigned or two's complement signed number respectively, *p* is the total number of digits, and *i* the number of digits in the 'integer part'. Notably, this format allows arbitrary placement of the 'units' place, which need not occur within the given digits: while the total number of bits *p* must be a natural number, *i* may be larger than *p*, zero, or even negative; these cases merely correspond to different overall scaling factors for the number.
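Whatever the notation, a Q*m*.*f* value converts to and from a real number by scaling with 2^f. A sketch for Q16.16 (the helper names are ours):

```python
F = 16                       # fractional bits in Q16.16

def to_q(x):
    """Encode a real number as a Q16.16 integer (round to nearest)."""
    return round(x * (1 << F))

def from_q(q):
    """Decode a Q16.16 integer back to a real number."""
    return q / (1 << F)

q = to_q(-1.5)
print(q)            # -98304, i.e. -1.5 * 2**16
print(from_q(q))    # -1.5
```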

Because fixed-point operations can produce results that have more digits than the operands, information loss is possible. For instance, the result of a fixed-point multiplication could have as many digits as the sum of the numbers of digits in the two operands. To fit the result into the same number of digits as the operands, the answer must be rounded or truncated, and the choice of which digits to keep is very important. When multiplying two fixed-point numbers with the same format, say with *I* integer digits and *Q* fractional digits each, the answer could have up to 2*I* integer digits and 2*Q* fractional digits.

For simplicity, many fixed-point multiply procedures use the same result format as the operands. This has the effect of keeping the middle digits: the *I* least significant integer digits and the *Q* most significant fractional digits. Fractional digits lost below this point represent a precision loss, which is common in fractional multiplication. If any integer digits are lost, however, the value will be radically inaccurate. Some model-based fixed-point packages^{ [5] } allow specifying a result format different from the input formats, enabling the user to maximize precision and avoid overflow.
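The middle-digit rule for a binary Q16.16 multiply can be sketched as follows (Python; a real implementation would also detect or mask overflow of the high integer bits):

```python
def qmul(a, b):
    """Q16.16 multiply keeping the 'middle' bits: the wide product has
    32 fractional bits, so shifting right by 16 discards the least
    significant fractional bits (the usual precision loss)."""
    return (a * b) >> 16

three = 3 << 16        # 3.0 in Q16.16
half = 1 << 15         # 0.5 in Q16.16
r = qmul(three, half)
print(r / 2**16)       # 1.5
```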

Some operations, like divide, often have built-in result limiting so that any positive overflow results in the largest possible number that can be represented by the current format. Likewise, negative overflow results in the most negative number representable in the current format. This built-in limiting is often referred to as *saturation*.
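Saturation can be sketched for a signed 16-bit word (Python; illustrative only):

```python
INT16_MAX, INT16_MIN = 32767, -32768

def sat_add16(a, b):
    """Add two signed 16-bit values, clamping on overflow (saturation)."""
    s = a + b
    if s > INT16_MAX:
        return INT16_MAX   # positive overflow -> largest value
    if s < INT16_MIN:
        return INT16_MIN   # negative overflow -> most negative value
    return s

print(sat_add16(30000, 10000))    # 32767 instead of a wrapped result
print(sat_add16(-30000, -10000))  # -32768
```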

Some processors support a hardware overflow flag that can generate an exception on the occurrence of an overflow, but it is usually too late to salvage the proper result at this point.

Very few computer languages include built-in support for fixed point values other than with the radix point immediately to the right of the least significant digit (i.e. integer), because for most applications, binary or decimal floating-point representations are usually simpler to use and accurate enough. Floating-point representations are easier to use than fixed-point representations, because they can handle a wider dynamic range and do not require programmers to specify the number of digits after the radix point. However, if they are needed, fixed-point numbers can be implemented even in programming languages like C and C++, which do not commonly include such support.

A common use of fixed-point BCD numbers is for storing monetary values, where the inexact values of binary floating-point numbers are often a liability. Historically, fixed-point representations were the norm for decimal data types; for example, in PL/I or COBOL. The Ada programming language includes built-in support for both fixed-point (binary and decimal) and floating-point. JOVIAL and Coral 66 also provide both floating- and fixed-point types.

ISO/IEC TR 18037^{ [6] } specifies fixed-point data types for the C programming language; vendors are expected to implement the language extensions for fixed point arithmetic in coming years. Fixed-point support is implemented in GCC.^{ [7] }^{ [8] }

Fixed-point types should not be confused with the decimal floating-point types of programming languages such as C# and Python.

Almost all relational databases, and the SQL standard, support fixed-point decimal arithmetic and storage of numbers. PostgreSQL has a special `numeric` type for exact storage of numbers with up to 1000 digits.^{ [9] }


- The Nest Labs Utilities library provides a limited set of macros and functions for fixed-point numbers, particularly when dealing with those numbers in the context of sensor sampling and sensor outputs.
- GnuCash, an application for tracking money written in C, switched from a floating-point representation of money to a fixed-point implementation as of version 1.6. This change was made to trade the less predictable rounding errors of floating-point representations for more control over rounding (for example, to the nearest cent).
- Tremor, Toast and MAD are software libraries which decode the Ogg Vorbis, GSM Full Rate and MP3 audio formats respectively. These codecs use fixed-point arithmetic because many audio decoding hardware devices do not have an FPU (partly to save money, but primarily to save power, since integer units are much smaller in silicon area than an FPU) and because audio decoding demands so much performance that a software implementation of floating point on low-speed devices could not produce output in real time.
- All 3D graphics engines on Sony's original PlayStation, Sega's Saturn, Nintendo's Game Boy Advance (only 2D), Nintendo DS (2D and 3D), Nintendo GameCube^{ [10] } and GP2X Wiz video game systems use fixed-point arithmetic for the same reason as Tremor and Toast: to gain throughput on an architecture without an FPU (for example, the PlayStation included hardware support for 4.12-bit values in its transformation coprocessor).
- The OpenGL ES 1.x specification includes a fixed-point profile, as it is an API aimed at embedded systems, which do not always have an FPU.
- TeX uses fixed point with 16 bits after the binary point, in units of points, for all position calculations. TeX font metric files use 32-bit signed fixed-point numbers, with 12 bits to the left of the binary point.
- The dc and bc programs are arbitrary-precision calculators, but only keep track of a (user-specified) fixed number of fractional digits.
- VisSim, a visually programmed block diagram language, supports a fixed-point block set to allow simulation and automatic code generation of fixed-point operations. Both word size and radix point can be specified on a per-operator basis.
- Fractint represents numbers as Q2.29 fixed-point numbers^{ [11] } to speed up drawing on old PCs with 386 or 486SX processors, which lacked an FPU.
- *Doom* was the last first-person shooter title by id Software to use a 16.16 fixed-point representation for all of its non-integer computations, including the map system, geometry, rendering and player movement. This was done so that the game would be playable on 386 and 486SX CPUs without an FPU. For compatibility reasons, this representation is still used in modern *Doom* source ports.
- WavPack is a fixed-point, lossless audio compressor. Its creator, David Bryant, justifies the decision to implement fixed point instead of floating point: "I believe that integer operations are less susceptible to subtle chip to chip variations that could corrupt the lossless nature of the compression."^{ [12] }
- Fixed-point numbers are sometimes used for storing and manipulating images and video frames. Processors with SIMD units aimed at image processing may include instructions suitable for handling packed fixed-point data.
- The Q# programming language for the Azure quantum computers, which implement quantum logic gates, contains a standard numeric library for performing fixed-point arithmetic on registers of qubits.^{ [13] }
- The TrueType font format uses the *F26Dot6* format (32-bit signed fixed-point with 26 bits to the left of the binary point) for some numeric values in its instructions.^{ [14] } This format was chosen to provide the minimal amount of precision required for hinting and for performance reasons.^{ [15] }
- The FixedPointNumbers package for Julia implements both 2^n- and (2^n − 1)-scaled fixed-point numbers. The latter are used as a foundation for image processing, providing a consistent scale for both "integer" and floating-point intensity values and avoiding the implicit "255 == 1.0" (non-)equation present in many other image processing frameworks.

- Binary scaling
- Q (number format)
- Libfixmath - a library written in C for fixed-point math
- Floating-point arithmetic
- Logarithmic number system


1. Texas Instruments, *TMS320C64x DSP Library Programmer's Reference*, Appendix A.2.
2. "MathWorks Fixed-Point Toolbox Documentation Glossary". *mathworks.com*.
3. solidThinking Inc. "VisSim is now solidThinking Embed". *www.vissim.com*.
4. *PS2 GS User's Guide*, Chapter 7.1 "Explanatory Notes".
5. *VisSim Fixed-Point User Guide*, http://www.vissim.com/downloads/doc/EmbeddedControlsDeveloper_UGv80.pdf
6. JTC1/SC22/WG14, status of TR 18037: Embedded C.
7. GCC wiki, Fixed-Point Arithmetic Support.
8. *Using GCC*, section 5.13 Fixed-Point Types.
9. PostgreSQL manual, section 8.1.2 Arbitrary Precision Numbers.
10. "Dolphin Emulator". *Dolphin Emulator*.
11. Fractint, A Little Code.
12. "WavPack Technical Description". *www.wavpack.com*. Retrieved 2015-07-13.
13. "Introduction to the Quantum Numerics Library". Retrieved 2019-11-13.
14. "The TrueType Instruction Set: Data types".
15. "[Freetype] Why 26.6?".

- Warren Jr., Henry S. (2013). *Hacker's Delight* (2nd ed.). Addison Wesley – Pearson Education, Inc. ISBN 978-0-321-84268-8.

- The Wikibook *Floating Point* has a page on the topic of: Fixed-Point Numbers
- The Wikibook *Embedded Systems* has a page on the topic of: Fixed-Point Arithmetic

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
