In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits [a] in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (kW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (kW) meaning 1024 words (2¹⁰) and megawords (MW) meaning 1,048,576 words (2²⁰). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is also some use of the IEC binary prefixes.
Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits. [1] Special-purpose designs like digital signal processors may have any word length from 4 to 80 bits. [1]
The size of a word can sometimes differ from what might be expected because of backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).
Depending on how a computer is organized, word-size units may be used for integer (fixed-point) numbers, floating-point numbers, memory addresses, registers, the unit of memory–processor transfer, the unit of address resolution, and instructions.
When a computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture.
Character size was, in the era before variable-sized character encodings, one of the influences on the unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.
After the introduction of the IBM System/360 design, which used eight-bit characters and supported lower-case letters, the standard size of a character (or more accurately, a byte) became eight bits. Word sizes thereafter have naturally been multiples of eight bits, with 16, 32, and 64 bits being commonly used.
Early machine designs included some that used what is often termed a variable word length. In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, for example, a flag or word mark. Such machines often use binary-coded decimal in 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes the IBM 702, IBM 705, IBM 7080, IBM 7010, UNIVAC 1050, IBM 1401, IBM 1620, and RCA 301.
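For example, on a 36-bit machine with 6-bit characters, exactly six character codes fit in one word. The sketch below (hypothetical C, modelling a 36-bit word in the low bits of a 64-bit integer; the codes and packing order are illustrative, not those of any particular machine) packs six 6-bit codes into one word:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: pack six 6-bit character codes into one 36-bit word,
   modelled here in the low 36 bits of a uint64_t. The codes and packing
   order are hypothetical, not those of any particular machine. */
static uint64_t pack36(const uint8_t codes[6]) {
    uint64_t word = 0;
    for (int i = 0; i < 6; i++) {
        word = (word << 6) | (codes[i] & 0x3F);   /* append one 6-bit code */
    }
    return word & 0xFFFFFFFFFULL;                 /* keep only 36 bits */
}

int main(void) {
    const uint8_t codes[6] = {1, 2, 3, 4, 5, 6};
    printf("packed word = %09llx\n", (unsigned long long)pack36(codes));
    return 0;
}
```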
Most of these machines work on one unit of memory at a time and since each instruction or datum is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I take 8 cycles (160 μs) just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes a variable number of cycles, depending on the size of the operands.
The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, the word-addressable machine approach, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions.
When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then.
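As a rough illustration of the trade-off, the sketch below assumes a hypothetical byte-addressed machine with a 4-byte word: a word-resolution address is the byte address with its two low-order bits dropped, which is why word addressing needs slightly fewer address bits.

```c
#include <stdint.h>

/* Hypothetical 32-bit-word, byte-addressed machine: 4 bytes per word. */
#define BYTES_PER_WORD 4u

/* A word-resolution address needs two fewer bits than a byte address. */
static uint32_t byte_to_word_address(uint32_t byte_addr) {
    return byte_addr / BYTES_PER_WORD;        /* drop the 2 low-order bits */
}

/* Going the other way requires naming which byte within the word is meant. */
static uint32_t word_to_byte_address(uint32_t word_addr, uint32_t byte_in_word) {
    return word_addr * BYTES_PER_WORD + byte_in_word;
}
```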
When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030 [4] ("Stretch"), a floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1-64 bits, a byte size of 1-8 bits and an accumulator offset of 0-127 bits.
In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another is typically just two instructions: load the source byte into a register, then store it to the destination address.
Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following: load the word containing the source byte; shift the source word to align the byte to its position in the destination word; AND it with a mask to zero out all but the desired bits; load the word containing the destination byte; AND that word with a mask to zero out the destination byte; OR the two registers together; and store the result back to the destination location.
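The following C sketch spells out that shift-and-mask sequence for a hypothetical word-addressed machine with 32-bit words and 8-bit bytes; the byte numbering and memory model are assumptions for illustration, not any specific architecture.

```c
#include <stdint.h>

/* Sketch of a byte move on a hypothetical word-addressed machine with
   32-bit words and 8-bit bytes, written as the shift-and-mask sequence
   a programmer or compiler would have to produce in registers. */
typedef uint32_t word_t;

static void move_byte(word_t *memory,
                      uint32_t src_word, unsigned src_byte,  /* 0..3, 0 = most significant */
                      uint32_t dst_word, unsigned dst_byte) {
    unsigned src_shift = (3 - src_byte) * 8;
    unsigned dst_shift = (3 - dst_byte) * 8;

    word_t src = memory[src_word];                 /* load the source word        */
    word_t b   = (src >> src_shift) & 0xFFu;       /* shift and mask out the byte */

    word_t dst = memory[dst_word];                 /* load the destination word   */
    dst &= ~((word_t)0xFFu << dst_shift);          /* clear the destination byte  */
    dst |= b << dst_shift;                         /* insert the source byte      */
    memory[dst_word] = dst;                        /* store the result back       */
}
```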
Alternatively many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example, the PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
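A rough model of such a byte pointer and an incrementing load is sketched below; the field layout and advance rule are simplified approximations of the PDP-10 scheme, not its actual instruction or pointer format.

```c
#include <stdint.h>

/* Rough model of a PDP-10-style byte pointer: byte size in bits, bit
   position of the byte within the word, and the word address. The layout
   here is illustrative only, not the actual PDP-10 format. */
struct byte_pointer {
    unsigned size;       /* byte size in bits (1..36 on the PDP-10)            */
    unsigned position;   /* number of bits to the right of the byte in a word  */
    uint64_t word_addr;  /* address of the 36-bit word holding the byte        */
};

/* Load the byte designated by the pointer from a 36-bit word held in the
   low bits of a uint64_t, then advance the pointer to the next byte, as an
   incrementing load-byte operation would. */
static uint64_t load_byte_and_advance(struct byte_pointer *p, const uint64_t *memory) {
    uint64_t word = memory[p->word_addr];
    uint64_t mask = ((uint64_t)1 << p->size) - 1;
    uint64_t byte = (word >> p->position) & mask;

    if (p->position >= p->size) {
        p->position -= p->size;          /* next byte to the right in the same word */
    } else {
        p->word_addr += 1;               /* move on to the next word                */
        p->position = 36 - p->size;      /* leftmost byte of that word              */
    }
    return byte;
}
```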
Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
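For instance, with 8-byte operands the address computation reduces to a left shift by three; the helper below is a hypothetical illustration in C.

```c
#include <stdint.h>

/* Because operand sizes are powers of two times the byte, indexing an array
   is a shift rather than a multiply. Sketch, assuming 8-byte elements. */
static uint64_t element_address(uint64_t base, uint64_t index) {
    return base + (index << 3);   /* index * 8, computed without a multiply */
}
```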
As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family.
In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity; this terminology is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word, while a quantity that is one half a word would be called a halfword. In fitting with this scheme, a VAX quadword is 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha.
Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word length to the next, some APIs and documentation define or refer to an older (and thus shorter) word length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different sized words refer to them as WORD (16 bits/2 bytes), DWORD (32 bits/4 bytes), and QWORD (64 bits/8 bytes).
A similar phenomenon has developed in Intel's x86 assembly language – because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d", "q", or "dq" identifiers denoting "double-", "quad-", or "double-quad-", which are in terms of the architecture's original 16-bit word size.
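These fixed meanings can be made explicit with standard fixed-width types; the typedef names below mirror the conventional WORD/DWORD/QWORD usage but are illustrative rather than the actual Windows API declarations.

```c
#include <stdint.h>

/* Illustration of size-family naming: the x86-derived names stay tied to
   the original 16-bit word regardless of the word size of the processor
   the code is compiled for. These typedefs mirror the conventional
   meanings; they are not the actual Windows API declarations. */
typedef uint16_t WORD_16;    /* "word"       : 16 bits, 2 bytes */
typedef uint32_t DWORD_32;   /* "doubleword" : 32 bits, 4 bytes */
typedef uint64_t QWORD_64;   /* "quadword"   : 64 bits, 8 bytes */

_Static_assert(sizeof(WORD_16)  == 2, "word is 16 bits on every target");
_Static_assert(sizeof(DWORD_32) == 4, "doubleword is 32 bits on every target");
_Static_assert(sizeof(QWORD_64) == 8, "quadword is 64 bits on every target");
```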
An example with a different word size is the IBM System/360 family. In the System/360 architecture, System/370 architecture and System/390 architecture, there are 8-bit bytes, 16-bit halfwords, 32-bit words and 64-bit doublewords. The z/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bit halfwords, 32-bit words, and 64-bit doublewords, and additionally features 128-bit quadwords.
In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.
Often carefully written source code – written with source-code compatibility and software portability in mind – can be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.
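A brief sketch of that practice in C: widths that matter are requested explicitly via fixed-width types, while quantities that should track the machine's address width use types defined to do so (the function here is a hypothetical example, not from any particular codebase).

```c
#include <stdint.h>
#include <stddef.h>

/* Portable across word sizes: the sum is exactly 32 bits on every target,
   while the index type follows the target's address width. */
uint32_t checksum32(const uint8_t *buf, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += buf[i];
    }
    return sum;
}
```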
key: bit: bits, c: characters, d: decimal digits, w: word size of architecture, n: variable size, wm: word mark
---|---|---|---|---|---|---|---|
Year | Computer architecture | Word size w | Integer sizes | Floating point sizes | Instruction sizes | Unit of address resolution | Char size |
1837 | Babbage Analytical engine | 50 d | w | — | Five different cards were used for different functions, exact size of cards not known. | w | — |
1941 | Zuse Z3 | 22 bit | — | w | 8 bit | w | — |
1942 | ABC | 50 bit | w | — | — | — | — |
1944 | Harvard Mark I | 23 d | w | — | 24 bit | — | — |
1946 (1948) {1953} | ENIAC (w/Panel #16 [5] ) {w/Panel #26 [6] } | 10 d | w, 2w (w) {w} | — | — (2 d, 4 d, 6 d, 8 d) {2 d, 4 d, 6 d, 8 d} | — — {w} | — |
1948 | Manchester Baby | 32 bit | w | — | w | w | — |
1951 | UNIVAC I | 12 d | w | — | 1⁄2w | w | 1 d |
1952 | IAS machine | 40 bit | w | — | 1⁄2w | w | 5 bit |
1952 | Fast Universal Digital Computer M-2 | 34 bit | w? | w | 34 bit = 4-bit opcode plus 3×10 bit address | 10 bit | — |
1952 | IBM 701 | 36 bit | 1⁄2w, w | — | 1⁄2w | 1⁄2w, w | 6 bit |
1952 | UNIVAC 60 | n d | 1 d, ... 10 d | — | — | — | 2 d, 3 d |
1952 | ARRA I | 30 bit | w | — | w | w | 5 bit |
1953 | IBM 702 | n c | 0 c, ... 511 c | — | 5 c | c | 6 bit |
1953 | UNIVAC 120 | n d | 1 d, ... 10 d | — | — | — | 2 d, 3 d |
1953 | ARRA II | 30 bit | w | 2w | 1⁄2w | w | 5 bit |
1954 (1955) | IBM 650 (w/IBM 653) | 10 d | w | — (w) | w | w | 2 d |
1954 | IBM 704 | 36 bit | w | w | w | w | 6 bit |
1954 | IBM 705 | n c | 0 c, ... 255 c | — | 5 c | c | 6 bit |
1954 | IBM NORC | 16 d | w | w, 2w | w | w | — |
1956 | IBM 305 | n d | 1 d, ... 100 d | — | 10 d | d | 1 d |
1956 | ARMAC | 34 bit | w | w | 1⁄2w | w | 5 bit, 6 bit |
1956 | LGP-30 | 31 bit | w | — | 16 bit | w | 6 bit |
1958 | UNIVAC II | 12 d | w | — | 1⁄2w | w | 1 d |
1958 | SAGE | 32 bit | 1⁄2w | — | w | w | 6 bit |
1958 | Autonetics Recomp II | 40 bit | w, 79 bit, 8 d, 15 d | 2w | 1⁄2w | 1⁄2w, w | 5 bit |
1958 | ZEBRA | 33 bit | w, 65 bit | 2w | w | w | 5 bit |
1958 | Setun | 6 trit (~9.5 bits) [c] | up to 6 tryte | up to 3 trytes | 4 trit? | ||
1958 | Electrologica X1 | 27 bit | w | 2w | w | w | 5 bit, 6 bit |
1959 | IBM 1401 | n c | 1 c, ... | — | 1 c, 2 c, 4 c, 5 c, 7 c, 8 c | c | 6 bit + wm |
1959 (TBD) | IBM 1620 | n d | 2 d, ... | — (4 d, ... 102 d) | 12 d | d | 2 d |
1960 | LARC | 12 d | w, 2w | w, 2w | w | w | 2 d |
1960 | CDC 1604 | 48 bit | w | w | 1⁄2w | w | 6 bit |
1960 | IBM 1410 | n c | 1 c, ... | — | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 bit + wm |
1960 | IBM 7070 | 10 d [d] | w, 1-9 d | w | w | w, d | 2 d |
1960 | PDP-1 | 18 bit | w | — | w | w | 6 bit |
1960 | Elliott 803 | 39 bit | |||||
1961 | IBM 7030 (Stretch) | 64 bit | 1 bit, ... 64 bit, 1 d, ... 16 d | w | 1⁄2w, w | bit (integer), 1⁄2w (branch), w (float) | 1 bit, ... 8 bit |
1961 | IBM 7080 | n c | 0 c, ... 255 c | — | 5 c | c | 6 bit |
1962 | GE-6xx | 36 bit | w, 2 w | w, 2 w, 80 bit | w | w | 6 bit, 9 bit |
1962 | UNIVAC III | 25 bit | w, 2w, 3w, 4w, 6 d, 12 d | — | w | w | 6 bit |
1962 | Autonetics D-17B Minuteman I Guidance Computer | 27 bit | 11 bit, 24 bit | — | 24 bit | w | — |
1962 | UNIVAC 1107 | 36 bit | 1⁄6w, 1⁄3w, 1⁄2w, w | w | w | w | 6 bit |
1962 | IBM 7010 | n c | 1 c, ... | — | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 b + wm |
1962 | IBM 7094 | 36 bit | w | w, 2w | w | w | 6 bit |
1962 | SDS 9 Series | 24 bit | w | 2w | w | w | |
1963 (1966) | Apollo Guidance Computer | 15 bit | w | — | w, 2w | w | — |
1963 | Saturn Launch Vehicle Digital Computer | 26 bit | w | — | 13 bit | w | — |
1964/1966 | PDP-6/PDP-10 | 36 bit | w | w, 2 w | w | w | 6 bit, 7 bit (typical), 9 bit |
1964 | Titan | 48 bit | w | w | w | w | w |
1964 | CDC 6600 | 60 bit | w | w | 1⁄4w, 1⁄2w | w | 6 bit |
1964 | Autonetics D-37C Minuteman II Guidance Computer | 27 bit | 11 bit, 24 bit | — | 24 bit | w | 4 bit, 5 bit |
1965 | Gemini Guidance Computer | 39 bit | 26 bit | — | 13 bit | 13 bit, 26 bit | — |
1965 | IBM 1130 | 16 bit | w, 2w | 2w, 3w | w, 2w | w | 8 bit |
1965 | IBM System/360 | 32 bit | 1⁄2w, w, 1 d, ... 16 d | w, 2w | 1⁄2w, w, 11⁄2w | 8 bit | 8 bit |
1965 | UNIVAC 1108 | 36 bit | 1⁄6w, 1⁄4w, 1⁄3w, 1⁄2w, w, 2w | w, 2w | w | w | 6 bit, 9 bit |
1965 | PDP-8 | 12 bit | w | — | w | w | 8 bit |
1965 | Electrologica X8 | 27 bit | w | 2w | w | w | 6 bit, 7 bit |
1966 | SDS Sigma 7 | 32 bit | 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
1969 | Four-Phase Systems AL1 | 8 bit | w | — | ? | ? | ? |
1970 | MP944 | 20 bit | w | — | ? | ? | ? |
1970 | PDP-11 | 16 bit | w | 2w, 4w | w, 2w, 3w | 8 bit | 8 bit |
1971 | CDC STAR-100 | 64 bit | 1⁄2w, w | 1⁄2w, w | 1⁄2w, w | bit | 8 bit |
1971 | TMS1802NC | 4 bit | w | — | ? | ? | — |
1971 | Intel 4004 | 4 bit | w, d | — | 2w, 4w | w | — |
1972 | Intel 8008 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
1972 | Calcomp 900 | 9 bit | w | — | w, 2w | w | 8 bit |
1974 | Intel 8080 | 8 bit | w, 2w, 2 d | — | w, 2w, 3w | w | 8 bit |
1975 | ILLIAC IV | 64 bit | w | w, 1⁄2w | w | w | — |
1975 | Motorola 6800 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
1975 | MOS Tech. 6501 MOS Tech. 6502 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
1976 | Cray-1 | 64 bit | 24 bit, w | w | 1⁄4w, 1⁄2w | w | 8 bit |
1976 | Zilog Z80 | 8 bit | w, 2w, 2 d | — | w, 2w, 3w, 4w, 5w | w | 8 bit |
1978 (1980) | 16-bit x86 (Intel 8086) (w/floating point: Intel 8087) | 16 bit | 1⁄2w, w, 2 d | — (2w, 4w, 5w, 17 d) | 1⁄2w, w, ... 7w | 8 bit | 8 bit |
1978 | VAX | 32 bit | 1⁄4w, 1⁄2w, w, 1 d, ... 31 d, 1 bit, ... 32 bit | w, 2w | 1⁄4w, ... 141⁄4w | 8 bit | 8 bit |
1979 (1984) | Motorola 68000 series (w/floating point) | 32 bit | 1⁄4w, 1⁄2w, w, 2 d | — (w, 2w, 21⁄2w) | 1⁄2w, w, ... 71⁄2w | 8 bit | 8 bit |
1985 | IA-32 (Intel 80386) (w/floating point) | 32 bit | 1⁄4w, 1⁄2w, w | — (w, 2w, 80 bit) | 8 bit, ... 120 bit 1⁄4w ... 33⁄4w | 8 bit | 8 bit |
1985 | ARMv1 | 32 bit | 1⁄4w, w | — | w | 8 bit | 8 bit |
1985 | MIPS I | 32 bit | 1⁄4w, 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
1991 | Cray C90 | 64 bit | 32 bit, w | w | 1⁄4w, 1⁄2w, 48 bit | w | 8 bit |
1992 | Alpha | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 1⁄2w | 8 bit | 8 bit |
1992 | PowerPC | 32 bit | 1⁄4w, 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
1996 | ARMv4 (w/Thumb) | 32 bit | 1⁄4w, 1⁄2w, w | — | w (1⁄2w, w) | 8 bit | 8 bit |
2000 | IBM z/Architecture | 64 bit [e] | 8 bit, 1⁄4w, 1⁄2w, w, 1 d, ... 31 d | 1⁄2w, w, 2w | 1⁄4w, 1⁄2w, 3⁄4w | 8 bit | 8 bit, UTF-16, UTF-32 |
2001 | IA-64 | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 41 bit (in 128-bit bundles) [7] | 8 bit | 8 bit |
2001 | ARMv6 (w/VFP) | 32 bit | 8 bit, 1⁄2w, w | — (w, 2w) | 1⁄2w, w | 8 bit | 8 bit |
2003 | x86-64 | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w, 80 bit | 8 bit, ... 120 bit | 8 bit | 8 bit |
2013 | ARMv8-A and ARMv9-A | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 1⁄2w | 8 bit | 8 bit |
[...] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
As to the internal information length, the information quantum is called a "catena," and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
Not wanting to call a "quantum" a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a "catena." It is an English word and exists in Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. [...]
[...] Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.)
Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...]
Three instructions are grouped together into 128-bit sized and aligned containers called bundles. Each bundle contains three 41-bit instruction slots and a 5-bit template field.