In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
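This distinction can be checked directly from a value's bit pattern. The following is a minimal sketch, assuming the IEEE 754 binary64 format and using Python's struct module to reinterpret the bits: a normal number is one whose biased exponent field is neither all zeros nor all ones, so its significand carries an implicit leading 1 rather than leading zeros.

```python
import struct

def is_normal_binary64(x: float) -> bool:
    """True if x is a normal binary64 value (not zero, subnormal, infinity, or NaN)."""
    # Reinterpret the float's bits as a 64-bit unsigned integer.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    exponent_field = (bits >> 52) & 0x7FF
    # A normal number's biased exponent is neither all zeros (zero or subnormal)
    # nor all ones (infinity or NaN).
    return 0 < exponent_field < 0x7FF

print(is_normal_binary64(1.0))         # True
print(is_normal_binary64(2.0**-1022))  # True  (smallest positive normal)
print(is_normal_binary64(2.0**-1023))  # False (subnormal)
print(is_normal_binary64(0.0))         # False
```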
The magnitude of the smallest normal number in a format is given by:

$$b^{E_\text{min}}$$

where b is the base (radix) of the format (commonly 2 or 10, for the binary and decimal number systems respectively), and $E_\text{min}$ depends on the size and layout of the format.
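As an illustration, the following sketch assumes the IEEE 754 binary64 format, with b = 2 and $E_\text{min}$ = -1022, and checks that $b^{E_\text{min}}$ matches the smallest positive normal value reported by Python's sys.float_info:

```python
import sys

# Assumed parameters for IEEE 754 binary64: b = 2, E_min = -1022.
b, e_min = 2, -1022

smallest_normal = float(b) ** e_min
print(smallest_normal)                        # 2.2250738585072014e-308
print(smallest_normal == sys.float_info.min)  # True: b**E_min is the platform's DBL_MIN
print(sys.float_info.min_exp - 1)             # -1022: Python exposes E_min as min_exp - 1
```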
Similarly, the magnitude of the largest normal number in a format is given by

$$b^{E_\text{max}} \left(b - b^{1-p}\right)$$

where p is the precision of the format in digits and $E_\text{max}$ is related to $E_\text{min}$ as:

$$E_\text{max} = 1 - E_\text{min}$$
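Likewise, a short sketch (again assuming binary64, with b = 2, p = 53, and $E_\text{max}$ = 1023) confirms that this expression reproduces the largest finite normal value:

```python
import sys

# Assumed parameters for IEEE 754 binary64: b = 2, p = 53, E_max = 1023.
b, p, e_max = 2, 53, 1023

largest_normal = float(b) ** e_max * (b - float(b) ** (1 - p))
print(largest_normal)                             # 1.7976931348623157e+308
print(largest_normal == sys.float_info.max)       # True: matches the platform's DBL_MAX
print(e_max == 1 - (sys.float_info.min_exp - 1))  # True: E_max = 1 - E_min
```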
In the IEEE 754 binary and decimal formats, b, p, $E_\text{min}$, and $E_\text{max}$ have the following values:[1]
Smallest and largest normal numbers for common numerical formats

Format       b    p        E_min     E_max    Smallest normal          Largest normal
binary16     2    11         -14        15    2^-14 ≈ 6.10e-5          65504
binary32     2    24        -126       127    2^-126 ≈ 1.18e-38        ≈ 3.40e38
binary64     2    53       -1022      1023    2^-1022 ≈ 2.23e-308      ≈ 1.80e308
binary128    2    113     -16382     16383    2^-16382 ≈ 3.36e-4932    ≈ 1.19e4932
decimal32    10   7          -95        96    1e-95                    9.999999e96
decimal64    10   16        -383       384    1e-383                   ≈ 1.00e385
decimal128   10   34       -6143      6144    1e-6143                  ≈ 1.00e6145
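The table entries can be reproduced from the two formulas above. The following sketch uses Python's decimal module for enough working precision, with the format parameters taken as assumptions from the table:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough working precision for the largest formats

# (b, p, E_min, E_max) for the IEEE 754 interchange formats, assumed from the table above.
formats = {
    "binary16":   (2, 11,     -14,    15),
    "binary32":   (2, 24,    -126,   127),
    "binary64":   (2, 53,   -1022,  1023),
    "binary128":  (2, 113, -16382, 16383),
    "decimal32":  (10, 7,     -95,    96),
    "decimal64":  (10, 16,   -383,   384),
    "decimal128": (10, 34,  -6143,  6144),
}

for name, (b, p, e_min, e_max) in formats.items():
    base = Decimal(b)
    smallest = base ** e_min                         # b**E_min
    largest = base ** e_max * (b - base ** (1 - p))  # b**E_max * (b - b**(1-p))
    print(f"{name:>10}  smallest ≈ {smallest:.6E}  largest ≈ {largest:.6E}")
```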