
In mathematics and digital electronics, a **binary number** is a number expressed in the **base-2 numeral system** or **binary numeral system**, which uses only two symbols: typically "0" (zero) and "1" (one).

**Digital electronics**, **digital technology** or **digital (electronic) circuits** are electronics that operate on digital signals. In contrast, analog circuits manipulate analog signals whose performance is more subject to manufacturing tolerance, signal attenuation and noise. Digital techniques are helpful because it is a lot easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values.

A **number** is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, 4, and so forth. A written symbol like "5" that represents a number is called a numeral. A numeral system is an organized way to write and manipulate this type of symbol, for example the Hindu–Arabic numeral system allows combinations of numerical digits like "5" and "0" to represent larger numbers like 50. A numeral in linguistics can refer to a symbol like 5, the words or phrase that names a number, like "five hundred", or other words that mean a specific number, like "dozen". In addition to their use in counting and measuring, numerals are often used for labels, for ordering, and for codes. In common usage, *number* may refer to a symbol, a word or phrase, or the mathematical object.

- History
- Egypt
- China
- India
- Other cultures
- Western predecessors to Leibniz
- Leibniz and the I Ching
- Later developments
- Representation
- Counting in binary
- Decimal counting
- Binary counting
- Fractions
- Binary arithmetic
- Addition
- Subtraction
- Multiplication
- Division
- Square root
- Bitwise operations
- Conversion to and from other numeral systems
- Decimal
- Hexadecimal
- Octal
- Representing real numbers
- See also
- References
- Further reading
- External links

The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices.

**Positional notation** usually denotes the extension to any base of the Hindu–Arabic numeral system. More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the product of the value of the digit by a factor determined by the *position of the digit*. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred. In modern positional systems, such as the decimal system, the *position* of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different *positions* in the digit string.

In digital numeral systems, the **radix** or **base** is the number of unique digits, including the digit zero, used to represent numbers in a positional numeral system. For example, for the decimal/denary system the radix is ten, because it uses the ten digits from 0 through 9.

The **bit** is a basic unit of information in information theory, computing, and digital communications. The name is a portmanteau of **binary digit**.

The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot, Juan Caramuel y Lobkowitz, and Gottfried Leibniz. However, systems related to binary numbers appeared earlier in multiple cultures, including ancient Egypt, China, and India. Leibniz was specifically inspired by the Chinese I Ching.

**Thomas Harriot**, also spelled **Harriott**, **Hariot** or **Heriot**, was an English astronomer, mathematician, ethnographer and translator. He is recognized for his contributions in astronomy, mathematics, and navigational techniques, and he worked closely with John White to create advanced maps for navigation. Although Harriot produced numerous papers on astronomy, mathematics, and navigation, little of this work was published; his only publication was *The Briefe and True Report of the New Found Land of Virginia*, which describes the English settlements and financial issues in Virginia at the time. He is sometimes credited with introducing the potato to the British Isles. Harriot was the first person to make a drawing of the Moon through a telescope, on 26 July 1609, over four months before Galileo.

**Juan Caramuel y Lobkowitz** was a Spanish Catholic scholastic philosopher, ecclesiastic, mathematician and writer. He is believed to be a great-grandson of Jan Popel z Lobkowicz.


The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to the binary number system) and Horus-Eye fractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed).^{ [1] } Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC.^{ [2] }

An **Egyptian fraction** is a finite sum of distinct unit fractions, such as 1/2 + 1/3 + 1/16.

The **Eye of Horus**, also known as **wadjet**, **wedjat** or **udjat**, is an ancient Egyptian symbol of protection, royal power, and good health. The Eye of Horus is similar to the Eye of Ra, which belongs to a different god, Ra, but represents many of the same concepts.

**Horus** or **Her, Heru, Hor** in Ancient Egyptian, is one of the most significant ancient Egyptian deities who served many functions, most notably god of kingship and the sky. He was worshipped from at least the late prehistoric Egypt until the Ptolemaic Kingdom and Roman Egypt. Different forms of Horus are recorded in history and these are treated as distinct gods by Egyptologists. These various forms may possibly be different manifestations of the same multi-layered deity in which certain attributes or syncretic relationships are emphasized, not necessarily in opposition but complementary to one another, consistent with how the Ancient Egyptians viewed the multiple facets of reality. He was most often depicted as a falcon, most likely a lanner falcon or peregrine falcon, or as a man with a falcon head.

The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.^{ [3] }

In mathematics, **ancient Egyptian multiplication**, one of two multiplication methods used by scribes, was a systematic method for multiplying two numbers that does not require the multiplication table, only the ability to multiply and divide by 2, and to add. It decomposes one of the multiplicands into a sum of powers of two and creates a table of doublings of the second multiplicand. This method may be called **mediation and duplation**, where mediation means halving one number and duplation means doubling the other number. It is still used in some areas.
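
The doubling-and-halving procedure can be sketched in a few lines of Python; the function name `egyptian_multiply` is illustrative, not a standard routine:

```python
def egyptian_multiply(a, b):
    """Multiply a and b using only doubling, halving, and addition."""
    total = 0
    while b > 0:
        if b % 2 == 1:   # this power of two appears in b's binary expansion
            total += a
        a *= 2           # duplation: double one number
        b //= 2          # mediation: halve the other
    return total

print(egyptian_multiply(13, 23))  # 299
```

The halving steps read off the binary digits of `b` from least to most significant, which is why the method is so closely tied to binary representation.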

The **Rhind Mathematical Papyrus** is one of the best known examples of Ancient Egyptian mathematics. It is named after Alexander Henry Rhind, a Scottish antiquarian, who purchased the papyrus in 1858 in Luxor, Egypt; it was apparently found during illegal excavations in or near the Ramesseum. It dates to around 1550 BC. The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind; a few small fragments are held by the Brooklyn Museum in New York City, and an 18 cm central section is missing. It is one of the two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is larger than the Moscow Mathematical Papyrus, while the latter is older.

The I Ching dates from the 9th century BC in China.^{ [4] } The binary notation in the *I Ching* is used to interpret its quaternary divination technique.^{ [5] }

**Quaternary** is the base-4 numeral system. It uses the digits 0, 1, 2 and 3 to represent any real number.

Among the many forms of **divination** is a cleromancy method using the *I Ching* or *Book of Changes*.

The *I Ching* is based on the Taoist duality of yin and yang.^{ [6] } Eight trigrams (*bagua*) and a set of 64 hexagrams ("sixty-four" *gua*), analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China.^{ [4] }

The Song Dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically.^{ [5] } Viewing the least significant bit on top of single hexagrams in Shao Yong's square, and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.^{ [7] }

The Indian scholar Pingala (c. 2nd century BC) developed a binary system for describing prosody.^{ [8] }^{ [9] } He used binary numbers in the form of short and long syllables (the latter equal in length to two short syllables), making it similar to Morse code.^{ [10] }^{ [11] } They were known as *laghu* (light) and *guru* (heavy) syllables.

Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to *science of meters* in Sanskrit. The binary representations in Pingala's system increase towards the right, not to the left as in the binary numbers of modern positional notation.^{ [10] }^{ [12] } In Pingala's system, the numbers start from one, not zero. Four short syllables "0000" form the first pattern and correspond to the value one. The numerical value is obtained by adding one to the sum of place values.^{ [13] }

The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450.^{ [14] } Slit drums with binary tones are used to encode messages across Africa and Asia.^{ [6] } Sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy.

In the late 13th century Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or ‘Ars generalis’ based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.^{ [15] }

In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.^{ [16] } Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".^{ [16] } (See Bacon's cipher.)

John Napier in 1617 described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.^{ [17] } Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.^{ [18] }

Leibniz studied binary numbering in 1679; his work appears in his article *Explication de l'Arithmétique Binaire* (published in 1703), whose full title translates into English as *"Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi"*.^{ [19] } Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:^{ [19] }

- 0 0 0 1   numerical value 2^{0}
- 0 0 1 0   numerical value 2^{1}
- 0 1 0 0   numerical value 2^{2}
- 1 0 0 0   numerical value 2^{3}

Leibniz interpreted the hexagrams of the I Ching as evidence of binary calculus.^{ [20] } As a Sinophile, Leibniz was aware of the I Ching, noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.^{ [21] } Leibniz was first introduced to the *I Ching* through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the *I Ching* hexagrams as an affirmation of the universality of his own religious beliefs as a Christian.^{ [20] } Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea of *creatio ex nihilo* or creation out of nothing.^{ [22] }

> [A concept that] is not easy to impart to the pagans, is the creation *ex nihilo* through God's almighty power. Now one can say that nothing in the world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing.

In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.^{ [23] }

In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled *A Symbolic Analysis of Relay and Switching Circuits*, Shannon's thesis essentially founded practical digital circuit design.^{ [24] }

In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "**K**itchen", where he had assembled it), which calculated using binary addition.^{ [25] } Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.^{ [26] }^{ [27] }^{ [28] }

The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating point numbers.^{ [29] }

Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667:

1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 |
---|---|---|---|---|---|---|---|---|---|
☒ | ☐ | ☒ | ☐ | ☐ | ☒ | ☒ | ☐ | ☒ | ☒ |
y | n | y | n | n | y | y | n | y | y |

The numeric value represented in each case is dependent upon the value assigned to each symbol. In the earlier days of computing, switches, punched holes and punched paper tapes were used to represent binary values.^{ [30] } In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.

In keeping with customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols **0** and **1**. When written, binary numerals are often subscripted, prefixed or suffixed in order to indicate their base, or radix. The following notations are equivalent:

- 100101 binary (explicit statement of format)
- 100101b (a suffix indicating binary format; also known as Intel convention^{ [31] }^{ [32] })
- 100101B (a suffix indicating binary format)
- bin 100101 (a prefix indicating binary format)
- 100101_{2} (a subscript indicating base-2 (binary) notation)
- %100101 (a prefix indicating binary format; also known as Motorola convention^{ [31] }^{ [32] })
- 0b100101 (a prefix indicating binary format, common in programming languages)
- 6b100101 (a prefix indicating number of bits in binary format, common in programming languages)

When spoken, binary numerals are usually read digit-by-digit, in order to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced *one zero zero*, rather than *one hundred*, to make its binary nature explicit, and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as *one hundred* (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct *value*), but this does not make its binary nature explicit.

Decimal number | Binary number |
---|---|

0 | 0 |

1 | 1 |

2 | 10 |

3 | 11 |

4 | 100 |

5 | 101 |

6 | 110 |

7 | 111 |

8 | 1000 |

9 | 1001 |

10 | 1010 |

11 | 1011 |

12 | 1100 |

13 | 1101 |

14 | 1110 |

15 | 1111 |

Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference.

Decimal counting uses the ten symbols *0* through *9*. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which is often called the *first digit*. When the available symbols for this position are exhausted, the least significant digit is reset to *0*, and the next digit of higher significance (one position to the left) is incremented (*overflow*), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows:

- 000, 001, 002, ... 007, 008, 009, (rightmost digit is reset to zero, and the digit to its left is incremented)
- 0**1**0, 011, 012, ...
- ...
- 090, 091, 092, ... 097, 098, 099, (rightmost two digits are reset to zeroes, and the next digit is incremented)
- **1**00, 101, 102, ...

Binary counting follows the same procedure, except that only the two symbols *0* and *1* are available. Thus, after a digit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next digit to the left:

- 0000,
- 000**1**, (rightmost digit starts over, and the next digit is incremented)
- 00**1**0, 0011, (rightmost two digits start over, and the next digit is incremented)
- 0**1**00, 0101, 0110, 0111, (rightmost three digits start over, and the next digit is incremented)
- **1**000, 1001, 1010, 1011, 1100, 1101, 1110, 1111 ...
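
The counting sequence above can be reproduced with Python's built-in binary formatting; the four-digit width is chosen here just to match the example:

```python
# Count from 0 to 15, printing each value as a four-bit binary numeral
for n in range(16):
    print(format(n, '04b'))
```

The `'04b'` format specifier means "binary, zero-padded to four digits", so the output runs 0000, 0001, 0010, ... 1111.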

In the binary system, each digit represents an increasing power of 2, with the rightmost digit representing 2^{0}, the next representing 2^{1}, then 2^{2}, and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" digit. For example, the binary number 100101 is converted to decimal form as follows:

- 100101_{2} = [ (**1**) × 2^{5} ] + [ (**0**) × 2^{4} ] + [ (**0**) × 2^{3} ] + [ (**1**) × 2^{2} ] + [ (**0**) × 2^{1} ] + [ (**1**) × 2^{0} ]
- 100101_{2} = [ **1** × 32 ] + [ **0** × 16 ] + [ **0** × 8 ] + [ **1** × 4 ] + [ **0** × 2 ] + [ **1** × 1 ]
- **100101**_{2} = 37_{10}
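
This sum-of-powers conversion can be sketched directly in Python, with the built-in base-2 `int` parser as a cross-check:

```python
digits = '100101'

# Sum the powers of 2 contributed by each '1' digit, least significant first
value = sum(int(d) * 2**i for i, d in enumerate(reversed(digits)))
print(value)           # 37

# Cross-check using Python's built-in base-2 parser
print(int(digits, 2))  # 37
```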

Fractions in binary arithmetic terminate only if 2 is the only prime factor of the denominator. As a result, 1/10 does not have a finite binary representation (**10** has prime factors **2** and **5**), which is why 10 × 0.1 does not precisely equal 1 in binary floating-point arithmetic. As an example, the binary expansion 1/3 = 0.010101..._{2} means: 1/3 = 0 × 2^{−1} + 1 × 2^{−2} + 0 × 2^{−3} + 1 × 2^{−4} + ... = 1/4 + 1/16 + 1/64 + ...
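
These effects are easy to observe in Python, whose `float` is a binary floating-point type; the convergence tolerance below is an arbitrary choice for illustration:

```python
# Ten copies of 0.1 do not sum exactly to 1, because 1/10 is a repeating binary fraction
total = sum([0.1] * 10)
print(total == 1.0)   # False

# Dyadic fractions such as 7/8 = 0.111 in binary are exact
print(0.5 + 0.25 + 0.125 == 0.875)   # True

# Partial sums of 1/4 + 1/16 + 1/64 + ... converge to 1/3 (binary 0.010101...)
approx = sum(2.0**-k for k in range(2, 40, 2))
print(abs(approx - 1/3) < 1e-9)   # True
```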

Fraction | Decimal | Binary | Fractional approximation |
---|---|---|---|

1/1 | 1 or 0.999... | 1 or 0.111... | 1/2 + 1/4 + 1/8... |

1/2 | 0.5 or 0.4999... | 0.1 or 0.0111... | 1/4 + 1/8 + 1/16 . . . |

1/3 | 0.333... | 0.010101... | 1/4 + 1/16 + 1/64 . . . |

1/4 | 0.25 or 0.24999... | 0.01 or 0.00111... | 1/8 + 1/16 + 1/32 . . . |

1/5 | 0.2 or 0.1999... | 0.00110011... | 1/8 + 1/16 + 1/128 . . . |

1/6 | 0.1666... | 0.0010101... | 1/8 + 1/32 + 1/128 . . . |

1/7 | 0.142857142857... | 0.001001... | 1/8 + 1/64 + 1/512 . . . |

1/8 | 0.125 or 0.124999... | 0.001 or 0.000111... | 1/16 + 1/32 + 1/64 . . . |

1/9 | 0.111... | 0.000111000111... | 1/16 + 1/32 + 1/64 . . . |

1/10 | 0.1 or 0.0999... | 0.000110011... | 1/16 + 1/32 + 1/256 . . . |

1/11 | 0.090909... | 0.00010111010001011101... | 1/16 + 1/64 + 1/128 . . . |

1/12 | 0.08333... | 0.00010101... | 1/16 + 1/64 + 1/256 . . . |

1/13 | 0.076923076923... | 0.000100111011000100111011... | 1/16 + 1/128 + 1/256 . . . |

1/14 | 0.0714285714285... | 0.0001001001... | 1/16 + 1/128 + 1/1024 . . . |

1/15 | 0.0666... | 0.00010001... | 1/16 + 1/256 . . . |

1/16 | 0.0625 or 0.0624999... | 0.0001 or 0.0000111... | 1/32 + 1/64 + 1/128 . . . |

Arithmetic in binary is much like arithmetic in other numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.

The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:

- 0 + 0 → 0
- 0 + 1 → 1
- 1 + 0 → 1
- 1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2^{1}))

Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:

- 5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + (1 × 10^{1}))
- 7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + (1 × 10^{1}))

This is known as *carrying*. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:

```
  1 1 1 1 1    (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
-------------
= 1 0 0 1 0 0 = 36
```

In this example, two numerals are being added together: 01101_{2} (13_{10}) and 10111_{2} (23_{10}). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10_{2}. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10_{2} again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11_{2}. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100_{2} (36 decimal).
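
The column-by-column carrying just described can be sketched in Python; `add_binary` is an illustrative name, not a library function:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column with explicit carries."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 2))  # digit written in this column
        carry = total // 2             # excess carried to the next column
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('01101', '10111'))  # '100100' (13 + 23 = 36)
```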

When computers must add two numbers, the identity x xor y = (x + y) mod 2 for single bits x and y also allows for very fast calculation.

A simplification for many binary addition problems is the Long Carry Method or Brookhouse Method of Binary Addition. This method is generally useful in any binary addition in which one of the numbers contains a long "string" of ones. It is based on the simple premise that under the binary system, when given a "string" of digits composed entirely of `n` ones (*where:*`n` is any integer length), adding 1 will result in the number 1 followed by a string of `n` zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of `n` 9s will result in the number 1 followed by a string of `n` 0s:

```
   Binary                      Decimal
  1 1 1 1 1     likewise     9 9 9 9 9
+         1                +         1
———————————                ———————————
1 0 0 0 0 0                1 0 0 0 0 0
```

Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 0_{2} (958_{10}) and 1 0 1 0 1 1 0 0 1 1_{2} (691_{10}), using the traditional carry method on the left, and the long carry method on the right:

```
Traditional Carry Method                 Long Carry Method
                               vs.
  1 1 1   1 1 1 1 1  (carried digits)      1 ←     1 ←          carry the 1 until it is one digit
                                                                past the "string" below
    1 1 1 0 1 1 1 1 1 0                      1 1 1 0 1 1 1 1 1 0   cross out the "string",
+   1 0 1 0 1 1 0 0 1 1                  +   1 0 1 0 1 1 0 0 1 1   and cross out the digit that
                                                                   was added to it
———————————————————————                  ——————————————————————
= 1 1 0 0 1 1 1 0 0 0 1                    1 1 0 0 1 1 1 0 0 0 1
```

The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 1_{2} (1649_{10}). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort.

+ | 0 | 1 |
---|---|---|
**0** | 0 | 1 |
**1** | 1 | 10 |
The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference is that 1 ∨ 1 = 1, while 1 + 1 = 10.

Subtraction works in much the same way:

- 0 − 0 → 0
- 0 − 1 → 1, borrow 1
- 1 − 0 → 1
- 1 − 1 → 0

Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as *borrowing*. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value.

```
    *   * * *    (starred columns are borrowed from)
  1 1 0 1 1 1 0
−     1 0 1 1 1
----------------
= 1 0 1 0 1 1 1
```

```
  *              (starred columns are borrowed from)
  1 0 1 1 1 1 1
−   1 0 1 0 1 1
----------------
= 0 1 1 0 1 0 0
```

Subtracting a positive number is equivalent to *adding* a negative number of equal absolute value. Computers use signed number representations to handle negative numbers—most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation subtraction can be summarized by the following formula:

**A − B = A + not B + 1**
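
A minimal sketch of this formula in Python, assuming an 8-bit word width (the width is an arbitrary choice for the example):

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111: keeps results within the word width

def subtract(a: int, b: int) -> int:
    """Compute A − B as A + NOT B + 1 in fixed-width two's complement."""
    not_b = ~b & MASK          # bitwise NOT, truncated to 8 bits
    return (a + not_b + 1) & MASK

print(subtract(0b1101110, 0b0010111))  # 87, i.e. 0b1010111 (110 − 23)
```

Negative results wrap around, e.g. `subtract(5, 9)` yields 252, the 8-bit two's complement encoding of −4.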

Multiplication in binary is similar to its decimal counterpart. Two numbers `A` and `B` can be multiplied by partial products: for each digit in `B`, the product of that digit and `A` is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in `B` that was used. The sum of all these partial products gives the final result.

Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:

- If the digit in `B` is 0, the partial product is also 0
- If the digit in `B` is 1, the partial product is equal to `A`

For example, the binary numbers 1011 and 1010 are multiplied as follows:

```
        1 0 1 1   (A)
      × 1 0 1 0   (B)
      ---------
        0 0 0 0   ← Corresponds to the rightmost 'zero' in B
+     1 0 1 1     ← Corresponds to the next 'one' in B
+   0 0 0 0
+ 1 0 1 1
---------------
= 1 1 0 1 1 1 0
```

Binary numbers can also be multiplied with bits after a binary point:

```
          1 0 1 . 1 0 1     A (5.625 in decimal)
        × 1 1 0 . 0 1       B (6.25 in decimal)
        -------------------
              1 . 0 1 1 0 1   ← Corresponds to a 'one' in B
+           0 0 . 0 0 0 0     ← Corresponds to a 'zero' in B
+          0 0 0 . 0 0 0
+        1 0 1 1 . 0 1
+      1 0 1 1 0 . 1
---------------------------
= 1 0 0 0 1 1 . 0 0 1 0 1     (35.15625 in decimal)
```

See also Booth's multiplication algorithm.
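
The partial-product scheme above (not Booth's algorithm) can be sketched as a shift-and-add loop; `multiply_binary` is an illustrative name:

```python
def multiply_binary(a: str, b: str) -> str:
    """Multiply binary strings by summing shifted partial products."""
    result = 0
    for shift, digit in enumerate(reversed(b)):
        if digit == '1':
            # partial product equals A, shifted left to line up with this digit of B
            result += int(a, 2) << shift
    return bin(result)[2:]

print(multiply_binary('1011', '1010'))  # '1101110' (11 × 10 = 110)
```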

× | 0 | 1 |
---|---|---|
**0** | 0 | 0 |
**1** | 0 | 1 |
The binary multiplication table is the same as the truth table of the logical conjunction operation ∧.

Long division in binary is again similar to its decimal counterpart.

In the example below, the divisor is 101_{2}, or 5 in decimal, while the dividend is 11011_{2}, or 27 in decimal. The procedure is the same as that of decimal long division; here, the divisor 101_{2} goes into the first three digits 110_{2} of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:

```
          1
    ___________
1 0 1 ) 1 1 0 1 1
      − 1 0 1
        -----
        0 0 1
```

The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:

```
          1 0 1
    ___________
1 0 1 ) 1 1 0 1 1
      − 1 0 1
        -----
        1 1 1
      − 1 0 1
        -----
          1 0
```

Thus, the quotient of 11011_{2} divided by 101_{2} is 101_{2}, as shown on the top line, while the remainder, shown on the bottom line, is 10_{2}. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2.

Aside from long division, one can also devise the procedure so as to allow for over-subtracting from the partial remainder at each iteration, thereby leading to alternative methods which are less systematic, but more flexible as a result.^{ [33] }
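
The standard long-division procedure (not the over-subtracting variants) can be sketched as follows; `divide_binary` is an illustrative name:

```python
def divide_binary(dividend: str, divisor: str):
    """Binary long division: bring down one dividend digit at a time."""
    d = int(divisor, 2)
    quotient, remainder = [], 0
    for digit in dividend:
        remainder = remainder * 2 + int(digit)  # shift in the next digit
        if remainder >= d:                      # divisor "goes into" the partial remainder
            quotient.append('1')
            remainder -= d
        else:
            quotient.append('0')
    return ''.join(quotient).lstrip('0') or '0', bin(remainder)[2:]

q, r = divide_binary('11011', '101')
print(q, r)  # '101' '10'  (27 ÷ 5 = 5 remainder 2)
```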

The process of taking a binary square root digit by digit is the same as for a decimal square root. An example is:

```
          1 0 0 1
         ---------
        √ 1010001
          1
         ---------
  101     01
           0
         --------
 1001     100
            0
         --------
10001     10001
          10001
          -------
               0
```
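
A digit-by-digit integer square root along these lines can be sketched in Python, trying one bit of the root at a time from the most significant position downward; `binary_sqrt` is an illustrative name:

```python
def binary_sqrt(n: int) -> int:
    """Digit-by-digit (bit-by-bit) integer square root."""
    root = 0
    # start at the highest even bit position at or below n's top bit
    bit = 1 << (max(n.bit_length() - 1, 0) & ~1)
    while bit:
        if n >= root + bit:          # this bit of the root fits
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2                    # move down two bit positions (one root digit)
    return root

print(binary_sqrt(0b1010001))  # 9, i.e. 0b1001 (the square root of 81)
```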

Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators. When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2.
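
These bitwise operators are available directly in most programming languages; in Python, for example:

```python
a, b = 0b1100, 0b1010

print(format(a & b, '04b'))   # 1000   AND
print(format(a | b, '04b'))   # 1110   OR
print(format(a ^ b, '04b'))   # 0110   XOR
print(format(a << 1, '05b'))  # 11000  shift left: equivalent to multiplying by 2
```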

To convert from a base-10 integer to its base-2 (binary) equivalent, the number is divided by two. The remainder is the least-significant bit. The quotient is again divided by two; its remainder becomes the next least significant bit. This process repeats until a quotient of one is reached. The sequence of remainders (including the final quotient of one), read in reverse order of computation, forms the binary value, as each remainder must be either zero or one when dividing by two. For example, (357)_{10} is expressed as (101100101)_{2}.^{ [34] }
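A sketch of this repeated-division method in Python (the helper name is illustrative). Collecting the remainders least-significant first and reversing at the end is equivalent to stopping at a quotient of one and reading upwards:

```python
def to_binary(n: int) -> str:
    """Decimal-to-binary conversion by repeated division by 2.

    Each remainder is one bit, least-significant first, so the
    collected remainders are reversed at the end.
    """
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # remainder = next least-significant bit
        bits.append(str(r))
    return "".join(reversed(bits))
```

For example, `to_binary(357)` returns `"101100101"`.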

Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, to convert 10010101101_{2} to decimal:

Prior value × 2 + Next bit = Next value
0 × 2 + **1** = 1
1 × 2 + **0** = 2
2 × 2 + **0** = 4
4 × 2 + **1** = 9
9 × 2 + **0** = 18
18 × 2 + **1** = 37
37 × 2 + **0** = 74
74 × 2 + **1** = 149
149 × 2 + **1** = 299
299 × 2 + **0** = 598
598 × 2 + **1** = **1197**

The result is 1197_{10}. The first Prior Value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.
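The doubling scheme in the table above can be sketched directly (the function name is illustrative):

```python
def from_binary(bits: str) -> int:
    """Binary-to-decimal conversion via the Horner scheme:
    double the prior value, then add the next bit, one column at a time."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)   # prior value × 2 + next bit
    return value
```

For example, `from_binary("10010101101")` returns `1197`.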

Binary | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1
---|---|---|---|---|---|---|---|---|---|---|---
Decimal | 1×2^{10} + | 0×2^{9} + | 0×2^{8} + | 1×2^{7} + | 0×2^{6} + | 1×2^{5} + | 0×2^{4} + | 1×2^{3} + | 1×2^{2} + | 0×2^{1} + | 1×2^{0} = 1197

The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving.

In a fractional binary number such as 0.11010110101_{2}, the first digit after the radix point has the value 1/2, the second 1/4, and so on. So if there is a 1 in the first place after the point, the number is at least 1/2, and conversely. Doubling a number that is at least 1/2 gives a result that is at least 1. This suggests the algorithm: repeatedly double the number to be converted, record a 1 if the result is at least 1 (and a 0 otherwise), and then throw away the integer part before the next doubling.
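Using exact rational arithmetic (Python's `fractions.Fraction`, to avoid floating-point rounding), the doubling algorithm can be sketched as:

```python
from fractions import Fraction

def fraction_to_binary(x: Fraction, digits: int = 12) -> str:
    """Convert a fraction in [0, 1) to binary by repeated doubling:
    after each doubling, the integer part (0 or 1) is the next digit."""
    out = ["0", "."]
    for _ in range(digits):
        x *= 2
        out.append("1" if x >= 1 else "0")  # record whether result ≥ 1
        x -= int(x)                         # throw away the integer part
    return "".join(out)
```

For example, `fraction_to_binary(Fraction(1, 3), 8)` returns `"0.01010101"`.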

For example, 1/3 = 0.333..._{10}, in binary, is:

Converting | Result
---|---
1/3 | 0.
1/3 × 2 = 2/3 < 1 | 0.0
2/3 × 2 = 4/3 ≥ 1 | 0.01
1/3 × 2 = 2/3 < 1 | 0.010
2/3 × 2 = 4/3 ≥ 1 | 0.0101

Thus the repeating decimal fraction 0.333... is equivalent to the repeating binary fraction 0.010101... .

Or for example, 0.1_{10}, in binary, is:

Converting | Result
---|---
0.1 | 0.
0.1 × 2 = **0.2** < 1 | 0.0
0.2 × 2 = **0.4** < 1 | 0.00
0.4 × 2 = **0.8** < 1 | 0.000
0.8 × 2 = **1.6** ≥ 1 | 0.0001
0.6 × 2 = **1.2** ≥ 1 | 0.00011
0.2 × 2 = **0.4** < 1 | 0.000110
0.4 × 2 = **0.8** < 1 | 0.0001100
0.8 × 2 = **1.6** ≥ 1 | 0.00011001
0.6 × 2 = **1.2** ≥ 1 | 0.000110011
0.2 × 2 = **0.4** < 1 | 0.0001100110

This is also a repeating binary fraction, 0.000110011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 0.1 + ... + 0.1 (10 additions) differs from 1 in floating-point arithmetic. In fact, the only binary fractions with terminating expansions are those of the form of an integer divided by a power of 2, which 1/10 is not.
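This rounding error is easy to demonstrate with standard Python floats (IEEE 754 double precision):

```python
# 0.1 cannot be stored exactly in binary floating point, so ten
# additions accumulate rounding error instead of reaching exactly 1.
total = sum([0.1] * 10)
is_exact = total == 1.0   # False: the sum falls just short of 1
error = abs(total - 1.0)  # a tiny but nonzero difference
```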

The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions; otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example, 0.101_{2} shifts to 101_{2} = 5_{10}; dividing by 2^{3} = 8 gives 0.101_{2} = 5/8 = 0.625_{10}.

Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly: first convert the binary number into hexadecimal, then convert the hexadecimal number into decimal.

For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10^{k}, where *k* is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10^{k} and added to the second converted piece, where *k* is the number of decimal digits in the second, least-significant piece before conversion.

Hex | Dec | Oct | Binary
---|---|---|---
0 | 0 | 0 | 0000
1 | 1 | 1 | 0001
2 | 2 | 2 | 0010
3 | 3 | 3 | 0011
4 | 4 | 4 | 0100
5 | 5 | 5 | 0101
6 | 6 | 6 | 0110
7 | 7 | 7 | 0111
8 | 8 | 10 | 1000
9 | 9 | 11 | 1001
A | 10 | 12 | 1010
B | 11 | 13 | 1011
C | 12 | 14 | 1100
D | 13 | 15 | 1101
E | 14 | 16 | 1110
F | 15 | 17 | 1111

Binary may be converted to and from hexadecimal more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2^{4}, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the adjacent table.

To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits:

- 3A_{16} = 0011 1010_{2}
- E7_{16} = 1110 0111_{2}

To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra **0** bits at the left (called padding). For example:

- 1010010_{2} = 0101 0010 grouped with padding = 52_{16}
- 11011101_{2} = 1101 1101 grouped = DD_{16}
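The grouping-with-padding rule can be sketched as follows (the helper name is illustrative; Python's built-in `format(int(bits, 2), "X")` would give the same result):

```python
def binary_to_hex(bits: str) -> str:
    """Group a bit string into nibbles (4 bits), padding on the left
    with zeros, and map each nibble to one hexadecimal digit."""
    pad = (-len(bits)) % 4            # bits needed to reach a multiple of 4
    bits = "0" * pad + bits
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))
```

For example, `binary_to_hex("1010010")` returns `"52"` and `binary_to_hex("11011101")` returns `"DD"`.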

To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values:

- C0E7_{16} = (12 × 16^{3}) + (0 × 16^{2}) + (14 × 16^{1}) + (7 × 16^{0}) = (12 × 4096) + (0 × 256) + (14 × 16) + (7 × 1) = 49,383_{10}

Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2^{3}, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.

Octal | Binary
---|---
0 | 000
1 | 001
2 | 010
3 | 011
4 | 100
5 | 101
6 | 110
7 | 111

Converting from octal to binary proceeds in the same fashion as it does for hexadecimal:

- 65_{8} = 110 101_{2}
- 17_{8} = 001 111_{2}

And from binary to octal:

- 101100_{2} = 101 100 grouped = 54_{8}
- 10011_{2} = 010 011 grouped with padding = 23_{8}
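The same grouping idea, with three-bit groups instead of four, can be sketched as:

```python
def binary_to_octal(bits: str) -> str:
    """Group bits into threes from the right (padding on the left
    with zeros); each group of three is one octal digit."""
    pad = (-len(bits)) % 3            # bits needed to reach a multiple of 3
    bits = "0" * pad + bits
    return "".join(str(int(bits[i:i + 3], 2))
                   for i in range(0, len(bits), 3))
```

For example, `binary_to_octal("101100")` returns `"54"` and `binary_to_octal("10011")` returns `"23"`.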

And from octal to decimal:

- 65_{8} = (6 × 8^{1}) + (5 × 8^{0}) = (6 × 8) + (5 × 1) = 53_{10}
- 127_{8} = (1 × 8^{2}) + (2 × 8^{1}) + (7 × 8^{0}) = (1 × 64) + (2 × 8) + (7 × 1) = 87_{10}

Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01_{2} means:

**1** × 2^{1} (1 × 2 = **2**) plus
**1** × 2^{0} (1 × 1 = **1**) plus
**0** × 2^{−1} (0 × 1/2 = **0**) plus
**1** × 2^{−2} (1 × 1/4 = **0.25**)

For a total of 3.25 decimal.
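Evaluating a binary numeral with a radix point, term by term as above, can be sketched as (the function name is illustrative):

```python
def binary_with_point(numeral: str) -> float:
    """Evaluate a binary numeral with a radix point by summing
    digit × 2**position; positions are negative after the point."""
    whole, _, frac = numeral.partition(".")
    value = sum(int(b) * 2 ** (len(whole) - 1 - i)
                for i, b in enumerate(whole))       # 2^1, 2^0, ...
    value += sum(int(b) * 2 ** -(i + 1)
                 for i, b in enumerate(frac))       # 2^-1, 2^-2, ...
    return value
```

For example, `binary_with_point("11.01")` returns `3.25`.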

All dyadic rational numbers have a *terminating* binary numeral: the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representations, but instead of terminating, they *recur*, with a finite sequence of digits repeating indefinitely. For instance, 1/3_{10} is the recurring binary fraction 0.010101..._{2}, as computed above.

The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2^{−1} + 2^{−2} + 2^{−3} + ... which is 1.

Binary numerals which neither terminate nor recur represent irrational numbers. For instance,

- 0.10100100010000100000100... does have a pattern, but it is not a fixed-length recurring pattern, so the number is irrational
- 1.0110101000001001111001100110011111110... is the binary representation of √2, the square root of 2, another irrational number. It has no discernible pattern.


1. Robson, Eleanor; Stedall, Jacqueline, eds. (2009), "Myth No. 2: the Horus eye fractions", *The Oxford Handbook of the History of Mathematics*, Oxford University Press, p. 790, ISBN 9780199213122.
2. Chrisomalis, Stephen (2010), *Numerical Notation: A Comparative History*, Cambridge University Press, pp. 42–43, ISBN 9780521878180.
3. Rudman, Peter Strom (2007), *How Mathematics Happened: The First 50,000 Years*, Prometheus Books, pp. 135–136, ISBN 9781615921768.
4. Hacker, Edward; Moore, Steve; Patsco, Lorraine (2002). *I Ching: An Annotated Bibliography*. Routledge. p. 13. ISBN 978-0-415-93969-0.
5. Redmond & Hon (2014), p. 227.
6. Shectman, Jonathan (2003). *Groundbreaking Scientific Experiments, Inventions, and Discoveries of the 18th Century*. Greenwood Publishing. p. 29. ISBN 978-0-313-32015-6.
7. Zhonglian, Shi; Wenzhao, Li; Poser, Hans (2000). *Leibniz' Binary System and Shao Yong's "Xiantian Tu"*, in: *Das Neueste über China: G. W. Leibnizens Novissima Sinica von 1697: Internationales Symposium, Berlin 4. bis 7. Oktober 1997*. Stuttgart: Franz Steiner Verlag. pp. 165–170. ISBN 3515074481.
8. Sanchez, Julio; Canton, Maria P. (2007). *Microcontroller Programming: The Microchip PIC*. Boca Raton, Florida: CRC Press. p. 37. ISBN 0-8493-7189-9.
9. Anglin, W. S.; Lambek, J. (1995). *The Heritage of Thales*. Springer. ISBN 0-387-94544-X.
10. Binary Numbers in Ancient India.
11. Math for Poets and Drummers (PDF, 145 KB).
12. Stakhov, Alexey; Olsen, Scott Anthony (2009). *The Mathematics of Harmony: From Euclid to Contemporary Mathematics and Computer Science*. ISBN 978-981-277-582-5.
13. van Nooten, B. (1993). "Binary Numbers in Indian Antiquity". *Journal of Indian Studies*. **21**: 31–50.
14. Bender, Andrea; Beller, Sieghard (16 December 2013). "Mangarevan invention of binary steps for easier calculation". *Proceedings of the National Academy of Sciences*. **111**: 1322–1327. doi:10.1073/pnas.1309160110. PMC 3910603. PMID 24344278.
15. See Bonner 2007, Fidora et al. 2011.
16. Bacon, Francis (1605). *The Advancement of Learning*. London. Chapter 1.
17. Shirley, John W. (1951). "Binary numeration before Leibniz". *American Journal of Physics*. **19** (8): 452–454. doi:10.1119/1.1933042.
18. Ineichen, R. (2008). "Leibniz, Caramuel, Harriot und das Dualsystem" (PDF). *Mitteilungen der deutschen Mathematiker-Vereinigung* (in German). **16** (1): 12–15.
19. Leibniz, G. *Explication de l'Arithmétique Binaire*. In *Die Mathematische Schriften*, ed. C. Gerhardt, Berlin, 1879, vol. 7, p. 223; English translation.
20. Smith, J. E. H. (2008). *Leibniz: What Kind of Rationalist?*. Springer. p. 415. ISBN 978-1-4020-8668-7.
21. Aiton, Eric J. (1985). *Leibniz: A Biography*. Taylor & Francis. pp. 245–248. ISBN 0-85274-470-6.
22. Lai, Yuen-Ting (1998). *Leibniz, Mysticism and Religion*. Springer. pp. 149–150. ISBN 978-0-7923-5223-5.
23. Boole, George (2009) [1854]. *An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities* (reprinted with corrections [1958] ed.). New York: Cambridge University Press. ISBN 978-1-108-00153-3.
24. Shannon, Claude Elwood (1940). *A Symbolic Analysis of Relay and Switching Circuits* (Thesis). Cambridge: Massachusetts Institute of Technology.
25. "National Inventors Hall of Fame – George R. Stibitz". 20 August 2008. Archived from the original on 9 July 2010. Retrieved 5 July 2010.
26. "George Stibitz: Bio". Math & Computer Science Department, Denison University. 30 April 2004. Retrieved 5 July 2010.
27. "Pioneers – The People and Ideas That Made a Difference – George Stibitz (1904–1995)". Kerry Redshaw. 20 February 2006. Retrieved 5 July 2010.
28. "George Robert Stibitz – Obituary". Computer History Association of California. 6 February 1995. Retrieved 5 July 2010.
29. "Konrad Zuse's Legacy: The Architecture of the Z1 and Z3" (PDF). *IEEE Annals of the History of Computing*. **19** (2): 5–15. 1997. doi:10.1109/85.586067.
30. "Introducing binary – Revision 1 – GCSE Computer Science". *BBC*. Retrieved 26 June 2019.
31. Küveler, Gerd; Schwoch, Dietrich (2013) [1996]. *Arbeitsbuch Informatik: eine praxisorientierte Einführung in die Datenverarbeitung mit Projektaufgabe* (in German). Vieweg-Verlag, reprint: Springer-Verlag. doi:10.1007/978-3-322-92907-5. ISBN 978-3-528-04952-2. Retrieved 5 August 2015.
32. Küveler, Gerd; Schwoch, Dietrich (4 October 2007). *Informatik für Ingenieure und Naturwissenschaftler: PC- und Mikrocomputertechnik, Rechnernetze* (in German). Vol. 2 (5th ed.). Vieweg, reprint: Springer-Verlag. ISBN 3834891916. Retrieved 5 August 2015.
33. "The Definitive Higher Math Guide to Long Division and Its Variants — for Integers". *Math Vault*. 24 February 2019. Retrieved 26 June 2019.
34. "Base System". Retrieved 31 August 2016.

- Sanchez, Julio; Canton, Maria P. (2007). *Microcontroller Programming: The Microchip PIC*. Boca Raton, FL: CRC Press. p. 37. ISBN 0-8493-7189-9.
- Redmond, Geoffrey; Hon, Tze-Ki (2014). *Teaching the I Ching*. Oxford University Press. ISBN 0-19-976681-9.


- Binary System at cut-the-knot
- Conversion of Fractions at cut-the-knot
- Sir Francis Bacon's BiLiteral Cypher system, which predates the binary number system.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
