# Word (computer architecture)

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits [lower-alpha 1] in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.

The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized, and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (though not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).

Documentation for computers with a fixed word size commonly stated memory sizes in words rather than bytes or characters. The documentation sometimes used metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (KW) meaning 65,536 words, and sometimes used them incorrectly, with kilowords (KW) meaning 1024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes.

Several of the earliest computers (and a few modern ones as well) used binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers had no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits. [1] Special-purpose designs, such as digital signal processors, may have any word length from 4 to 80 bits. [1]

The size of a word can sometimes differ from the expected one due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).

## Uses of words

Depending on how a computer is organized, word-size units may be used for:

Fixed-point numbers
Holders for fixed point, usually integer, numerical values may be available in one or in several different sizes, but one of the sizes available will almost always be the word. The other sizes, if any, are likely to be multiples or fractions of the word size. The smaller sizes are normally used only for efficient use of memory; when loaded into the processor, their values usually go into a larger, word-sized holder.
Floating-point numbers
Holders for floating-point numerical values are typically either a word or a multiple of a word.
Addresses
Holders for memory addresses must be of a size capable of expressing the needed range of values but not be excessively large, so often the size used is the word, though it can also be a multiple or fraction of the word size.
Registers
Processor registers are designed with a size appropriate for the type of data they hold, e.g. integers, floating-point numbers, or addresses. Many computer architectures use general-purpose registers that are capable of storing data in multiple representations.
Memory–processor transfer
When the processor reads from the memory subsystem into a register or writes a register's value to memory, the amount of data transferred is often a word. Historically, this number of bits transferable in one cycle was also called a catena in some environments (such as the Bull GAMMA 60). [2] [3] In simple memory subsystems, the word is transferred over the memory data bus, which typically has a width of a word or half-word. In memory subsystems that use caches, the word-sized transfer is the one between the processor and the first level of cache; at lower levels of the memory hierarchy, larger transfers (a multiple of the word size) are normally used.
Unit of address resolution
In a given architecture, successive address values designate successive units of memory; this unit is the unit of address resolution. In most computers, the unit is either a character (e.g. a byte) or a word. (A few computers have used bit resolution.) If the unit is a word, then a larger amount of memory can be accessed using an address of a given size at the cost of added complexity to access individual characters. On the other hand, if the unit is a byte, then individual characters can be addressed (i.e. selected during the memory operation).
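The trade-off can be made concrete with a small calculation. This is a sketch only; the 16-bit address width and the 4-byte word are hypothetical values chosen for illustration, not tied to any particular machine:

```python
# With a fixed number of address bits, word addressing reaches more
# memory than byte addressing, at the cost of harder character access.
# Hypothetical parameters: 16-bit addresses, 32-bit (4-byte) words.

ADDRESS_BITS = 16
BYTES_PER_WORD = 4

units = 2 ** ADDRESS_BITS                        # 65,536 addressable units either way

byte_addressable_reach = units                   # 65,536 bytes (64 KiB)
word_addressable_reach = units * BYTES_PER_WORD  # 262,144 bytes (256 KiB)

print(byte_addressable_reach, word_addressable_reach)
```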
Instructions
Machine instructions are normally the size of the architecture's word, such as in RISC architectures, or a multiple of the "char" size that is a fraction of it. This is a natural choice since instructions and data usually share the same memory subsystem. In Harvard architectures the word sizes of instructions and data need not be related, as instructions and data are stored in different memories; for example, the processor in the 1ESS electronic telephone switch had 37-bit instructions and 23-bit data words.

## Word size choice

When a computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture.

Character size was one of the influences on the unit of address resolution and the choice of word size in the past, before variable-sized character encodings. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.

After the introduction of the IBM System/360 design, which used eight-bit characters and supported lower-case letters, the standard size of a character (or more accurately, a byte) became eight bits. Word sizes thereafter were naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.

### Variable-word architectures

Early machine designs included some that used what is often termed a variable word length. In this type of organization, an operand had no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, e.g., a flag or word mark. Such machines often used binary-coded decimal in 4-bit digits, or 6-bit characters, for numbers. This class of machines included the IBM 702, IBM 705, IBM 7080, IBM 7010, UNIVAC 1050, IBM 1401, IBM 1620, and RCA 301.

Most of these machines worked on one unit of memory at a time, and since each instruction or datum was several units long, each instruction took several cycles just to access memory. These machines were often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I took 8 cycles just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution took a completely variable number of cycles, depending on the size of the operands.

### Word, bit and byte addressing

The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, used in word-addressable machines, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions.

When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then.
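The extra address bits needed for byte resolution can be sketched in a few lines; the 4-byte word below is an assumed example size, not tied to any particular machine:

```python
# On a byte-addressable machine with 4-byte words, word k starts at
# byte address k * 4, so byte addresses need log2(4) = 2 more low-order
# bits than word-resolution addresses covering the same memory.

BYTES_PER_WORD = 4                             # assumed example size
EXTRA_BITS = BYTES_PER_WORD.bit_length() - 1   # log2(4) = 2

def byte_address_of_word(word_number: int) -> int:
    """Byte address of the first byte of the given word."""
    return word_number << EXTRA_BITS

print(byte_address_of_word(10))  # word 10 starts at byte address 40
```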

When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030 [4] ("Stretch"), a floating point instruction can only address words, while an integer arithmetic instruction can specify a field length of 1–64 bits, a byte size of 1–8 bits, and an accumulator offset of 0–127 bits.

In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another is typically:

1. LOAD the source byte
2. STORE the result back in the target byte

Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following:

1. LOAD the word containing the source byte
2. SHIFT the source word to align the desired byte to the correct position in the target word
3. AND the source word with a mask to zero out all but the desired bits
4. LOAD the word containing the target byte
5. AND the target word with a mask to zero out the target byte
6. OR the registers containing the source and target words to insert the source byte
7. STORE the result back in the target location
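The seven steps above can be simulated in a few lines. This is a minimal sketch assuming 32-bit words, 8-bit bytes, and byte 0 as the most significant byte of a word; real instruction sequences vary by machine.

```python
# Minimal simulation of the shift-and-mask byte move described above.

WORD_BITS = 32
BYTE_BITS = 8

def shift_for(byte_index: int) -> int:
    """Bit offset of byte `byte_index` within a word (0 = most significant)."""
    return WORD_BITS - BYTE_BITS * (byte_index + 1)

def move_byte(memory, src_word, src_byte, dst_word, dst_byte):
    word = memory[src_word]                       # 1. LOAD the source word
    byte = (word >> shift_for(src_byte)) & 0xFF   # 2-3. SHIFT and AND with a mask
    target = memory[dst_word]                     # 4. LOAD the target word
    target &= ~(0xFF << shift_for(dst_byte)) & 0xFFFFFFFF  # 5. AND out the target byte
    target |= byte << shift_for(dst_byte)         # 6. OR in the source byte
    memory[dst_word] = target                     # 7. STORE the result

mem = [0x11223344, 0xAABBCCDD]
move_byte(mem, 0, 1, 1, 2)    # copy the 0x22 byte over the 0xCC byte
print(hex(mem[1]))            # 0xaabb22dd
```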

Alternatively, many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example, the PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
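A PDP-10-style byte load can be sketched in terms of the pointer's position and size fields. The functions below are illustrative only; they model the extract-and-advance behavior, not the machine's actual pointer encoding or instruction set.

```python
# Illustrative model of a byte pointer holding a byte size S, a
# position P (bits to the right of the byte), and a word address.

def load_byte(memory, address, position, size):
    """Extract a `size`-bit byte ending `position` bits from the right."""
    return (memory[address] >> position) & ((1 << size) - 1)

def advance(position, size):
    """Move the pointer to the next byte, toward the low end of the word."""
    return position - size

mem = [0o123456701234]                 # one 36-bit word
p = 36 - 6                             # start at the leftmost 6-bit byte
print(oct(load_byte(mem, 0, p, 6)))    # 0o12
p = advance(p, 6)
print(oct(load_byte(mem, 0, p, 6)))    # 0o34
```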

### Powers of two

Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
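The index-to-offset conversion can be sketched as follows; the 8-byte element is an arbitrary example size, chosen only because it is a power of two:

```python
# With a power-of-two element size, index-to-offset conversion is a
# left shift instead of a multiplication, and the reverse is a right
# shift instead of a division.

ELEMENT_SIZE = 8                         # bytes; must be a power of two
SHIFT = ELEMENT_SIZE.bit_length() - 1    # log2(8) = 3

index = 13
offset = index << SHIFT                  # same as index * ELEMENT_SIZE
recovered = offset >> SHIFT              # same as offset // ELEMENT_SIZE

print(offset, recovered)                 # 104 13
```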

## Size families

As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family.

In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity, matching the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word, while a quantity that is one half a word would be called a halfword. In keeping with this scheme, a VAX quadword is 64 bits. DEC continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha.

Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word length to the next, some APIs and documentation define or refer to an older (and thus shorter) word length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different-sized words refer to them as:

• WORD (16 bits/2 bytes)
• DWORD (32 bits/4 bytes)
• QWORD (64 bits/8 bytes)
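These fixed widths can be checked with Python's struct module; the mapping of the Windows-style names onto the standard-size format codes 'H', 'I', and 'Q' is shown for illustration only.

```python
import struct

# The Windows-style names denote fixed widths regardless of the host
# word size; '<' selects standard (non-native) sizes for the codes.
SIZES = {"WORD": "<H", "DWORD": "<I", "QWORD": "<Q"}

for name, fmt in SIZES.items():
    print(name, struct.calcsize(fmt) * 8, "bits")
```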

A similar phenomenon has developed in Intel's x86 assembly language – because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d" or "q" identifiers denoting "double-", "quad-" or "double-quad-", which are in terms of the architecture's original 16-bit word size.

An example with a different word size is the IBM System/360 family. In the System/360 architecture, System/370 architecture and System/390 architecture, there are 8-bit bytes, 16-bit halfwords, 32-bit words and 64-bit doublewords. The z/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bit halfwords, 32-bit words, and 64-bit doublewords, and additionally features 128-bit quadwords.

In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.

Source code written carefully with source-code compatibility and software portability in mind can often be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.

## Table of word sizes

key: bit: bits, c: characters, d: decimal digits, w: word size of architecture, n: variable size, wm: word mark

Table columns: Year | Computer architecture | Word size w | Integer sizes | Floating-point sizes | Instruction sizes | Unit of address resolution | Char size
1837 Babbage
Analytical engine
50 dwFive different cards were used for different functions, exact size of cards not known.w
1941 Zuse Z3 22 bitw8 bitw
1942 ABC 50 bitw
1944 Harvard Mark I 23 dw24 bit
1946
(1948)
{1953}
ENIAC
(w/Panel #16 [5] )
{w/Panel #26 [6] }
10 dw, 2w
(w)
{w}

(2 d, 4 d, 6 d, 8 d)
{2 d, 4 d, 6 d, 8 d}

{w}
1948 Manchester Baby 32 bitwww
1951 UNIVAC I 12 dw12ww1 d
1952 IAS machine 40 bitw12ww5 bit
1952 Fast Universal Digital Computer M-2 34 bitw?w34 bit = 4-bit opcode plus 3×10 bit address10 bit
1952 IBM 701 36 bit12w, w12w12w, w6 bit
1952 UNIVAC 60 n d1 d, ... 10 d2 d, 3 d
1952 ARRA I 30 bitwww5 bit
1953 IBM 702 n c0 c, ... 511 c5 cc6 bit
1953 UNIVAC 120 n d1 d, ... 10 d2 d, 3 d
1953 ARRA II 30 bitw2w12ww5 bit
1954
(1955)
IBM 650
(w/IBM 653)
10 dw
(w)
ww2 d
1954 IBM 704 36 bitwwww6 bit
1954 IBM 705 n c0 c, ... 255 c5 cc6 bit
1954 IBM NORC 16 dww, 2www
1956 IBM 305 n d1 d, ... 100 d10 dd1 d
1956 ARMAC 34 bitww12ww5 bit, 6 bit
1957 Autonetics Recomp I 40 bitw, 79 bit, 8 d, 15 d12w12w, w5 bit
1958 UNIVAC II 12 dw12ww1 d
1958 SAGE 32 bit12www6 bit
1958 Autonetics Recomp II 40 bitw, 79 bit, 8 d, 15 d2w12w12w, w5 bit
1958 Setun 6  trit (~9.5 bits) [lower-alpha 2] up to 6  tryte up to 3 trytes4 trit?
1958 Electrologica X1 27 bitw2www5 bit, 6 bit
1959 IBM 1401 n c1 c, ...1 c, 2 c, 4 c, 5 c, 7 c, 8 cc6 bit + wm
1959
(TBD)
IBM 1620 n d2 d, ...
(4 d, ... 102 d)
12 dd2 d
1960 LARC 12 dw, 2ww, 2www2 d
1960 CDC 1604 48 bitww12ww6 bit
1960 IBM 1410 n c1 c, ...1 c, 2 c, 6 c, 7 c, 11 c, 12 cc6 bit + wm
1960 IBM 7070 10 d [lower-alpha 3] w, 1-9 dwww, d2 d
1960 PDP-1 18 bitwww6 bit
1960 Elliott 803 39 bit
1961 IBM 7030
(Stretch)
64 bit1 bit, ... 64 bit,
1 d, ... 16 d
w12w, wbit (integer),
12w (branch),
w (float)
1 bit, ... 8 bit
1961 IBM 7080 n c0 c, ... 255 c5 cc6 bit
1962 GE-6xx 36 bitw, 2 ww, 2 w, 80 bitww6 bit, 9 bit
1962 UNIVAC III 25 bitw, 2w, 3w, 4w, 6 d, 12 dww6 bit
1962Autonetics D-17B
Minuteman I Guidance Computer
27 bit11 bit, 24 bit24 bitw
1962 UNIVAC 1107 36 bit16w, 13w, 12w, wwww6 bit
1962 IBM 7010 n c1 c, ...1 c, 2 c, 6 c, 7 c, 11 c, 12 cc6 b + wm
1962 IBM 7094 36 bitww, 2www6 bit
1962 SDS 9 Series 24 bitw2www
1963
(1966)
Apollo Guidance Computer 15 bitww, 2ww
1963 Saturn Launch Vehicle Digital Computer 26 bitw13 bitw
1964/1966 PDP-6/PDP-10 36 bitww, 2 www6 bit
7 bit (typical)
9 bit
1964 Titan 48 bitwwwww
1964 CDC 6600 60 bitww14w, 12ww6 bit
1964Autonetics D-37C
Minuteman II Guidance Computer
27 bit11 bit, 24 bit24 bitw4 bit, 5 bit
1965 Gemini Guidance Computer 39 bit26 bit13 bit13 bit, 26 bit
1965 IBM 1130 16 bitw, 2w2w, 3ww, 2ww8 bit
1965 IBM System/360 32 bit12w, w,
1 d, ... 16 d
w, 2w12w, w, 112w8 bit8 bit
1965 UNIVAC 1108 36 bit16w, 14w, 13w, 12w, w, 2ww, 2www6 bit, 9 bit
1965 PDP-8 12 bitwww8 bit
1965 Electrologica X8 27 bitw2www6 bit, 7 bit
1966 SDS Sigma 7 32 bit12w, ww, 2ww8 bit8 bit
1969 Four-Phase Systems AL1 8 bitw???
1970 MP944 20 bitw???
1970 PDP-11 16 bitw2w, 4ww, 2w, 3w8 bit8 bit
1971 CDC STAR-100 64 bit12w, w12w, w12w, wbit8 bit
1971 TMS1802NC 4 bitw??
1971 Intel 4004 4 bitw, d2w, 4ww
1972 Intel 8008 8 bitw, 2 dw, 2w, 3ww8 bit
1972 Calcomp 900 9 bitww, 2ww8 bit
1974 Intel 8080 8 bitw, 2w, 2 dw, 2w, 3ww8 bit
1975 ILLIAC IV 64 bitww, 12www
1975 Motorola 6800 8 bitw, 2 dw, 2w, 3ww8 bit
1975 MOS Tech. 6501
MOS Tech. 6502
8 bitw, 2 dw, 2w, 3ww8 bit
1976 Cray-1 64 bit24 bit, ww14w, 12ww8 bit
1976 Zilog Z80 8 bitw, 2w, 2 dw, 2w, 3w, 4w, 5ww8 bit
1978
(1980)
16-bit x86 (Intel 8086)
(w/floating point: Intel 8087)
16 bit12w, w, 2 d
(2w, 4w, 5w, 17 d)
12w, w, ... 7w8 bit8 bit
1978 VAX 32 bit14w, 12w, w, 1 d, ... 31 d, 1 bit, ... 32 bitw, 2w14w, ... 1414w8 bit8 bit
1979
(1984)
Motorola 68000 series
(w/floating point)
32 bit14w, 12w, w, 2 d
(w, 2w, 212w)
12w, w, ... 712w8 bit8 bit
1985 IA-32 (Intel 80386) (w/floating point)32 bit14w, 12w, w
(w, 2w, 80 bit)
8 bit, ... 120 bit
14w ... 334w
8 bit8 bit
1985 ARMv1 32 bit14w, ww8 bit8 bit
1985 MIPS I 32 bit14w, 12w, ww, 2ww8 bit8 bit
1991 Cray C90 64 bit32 bit, ww14w, 12w, 48 bitw8 bit
1992 Alpha 64 bit8 bit, 14w, 12w, w12w, w12w8 bit8 bit
1992 PowerPC 32 bit14w, 12w, ww, 2ww8 bit8 bit
1996 ARMv4
(w/Thumb)
32 bit14w, 12w, ww
(12w, w)
8 bit8 bit
2000 IBM z/Architecture
(w/vector facility)
64 bit14w, 12w, w
1 d, ... 31 d
12w, w, 2w14w, 12w, 34w8 bit8 bit, UTF-16, UTF-32
2001 IA-64 64 bit8 bit, 14w, 12w, w12w, w41 bit (in 128-bit bundles) [7] 8 bit8 bit
2001 ARMv6
(w/VFP)
32 bit8 bit, 12w, w
(w, 2w)
12w, w8 bit8 bit
2003 x86-64 64 bit8 bit, 14w, 12w, w12w, w, 80 bit8 bit, ... 120 bit8 bit8 bit
2013 ARMv8-A and ARMv9-A 64 bit8 bit, 14w, 12w, w12w, w12w8 bit8 bit

## Notes

1. Many early computers were decimal, and a few were ternary
2. The bit equivalent is computed by taking the amount of information entropy provided by the trit, which is ${\displaystyle \log _{2}(3)}$. This gives an equivalent of about 9.51 bits for 6 trits.
3. Three-state sign
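The bit equivalent in note 2 can be reproduced numerically:

```python
import math

# Numerical check of note 2: six trits carry 6 * log2(3) bits of
# information entropy.
bits = 6 * math.log2(3)
print(round(bits, 2))   # 9.51
```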

## Related Research Articles

The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, +/−, or on/off are also commonly used.

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as The Internet Protocol refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The first bit is number 0, making the eighth bit number 7.

In computer programming, machine code is any low-level programming language, consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). Each instruction causes the CPU to perform a very specific task, such as a load, a store, a jump, or an arithmetic logic unit (ALU) operation on one or more units of data in the CPU's registers or memory.

The IBM System/360 (S/360) is a family of mainframe computer systems that was announced by IBM on April 7, 1964, and delivered between 1965 and 1978. It was the first family of computers designed to cover both commercial and scientific applications and to cover a complete range of applications from small to large. The design distinguished between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the only partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, which features 8-bit byte addressing and binary, decimal, and hexadecimal floating-point calculations.

In computing, endianness is the order or sequence of bytes of a word of digital data in computer memory. Endianness is primarily expressed as big-endian (BE) or little-endian (LE). A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest. A little-endian system, in contrast, stores the least-significant byte at the smallest address. Bi-endianness is a feature supported by numerous computer architectures that feature switchable endianness in data fetches and stores or for instruction fetches. Other orderings are generically called middle-endian or mixed-endian.

In computer architecture, 8-bit integers or other data units are those that are 8 bits wide. Also, 8-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers or data buses of that size. Memory addresses for 8-bit CPUs could in theory be 8 bits wide, but are generally larger, usually 16 bits. '8-bit' is also a generation of microcomputers in which 8-bit microprocessors were the norm.

In computer science, an instruction set architecture (ISA), also called computer architecture, is an abstract model of a computer. A device that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation.

A computer number format is the internal representation of numeric values in digital device hardware and software, such as in programmable computers and calculators. Numerical values are stored as groupings of bits, such as bytes and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer; the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.

The IBM 700/7000 series is a series of large-scale (mainframe) computer systems that were made by IBM through the 1950s and early 1960s. The series includes several different, incompatible processor architectures. The 700s use vacuum-tube logic and were made obsolete by the introduction of the transistorized 7000s. The 7000s, in turn, were eventually replaced with System/360, which was announced in 1964. However the 360/65, the first 360 powerful enough to replace 7000s, did not become available until November 1965. Early problems with OS/360 and the high cost of converting software kept many 7000s in service for years afterward.

In computing, a memory address is a reference to a specific memory location used at various levels by software and hardware. Memory addresses are fixed-length sequences of digits conventionally displayed and manipulated as unsigned integers. This numerical semantics is based on features of the CPU, as well as on the use of memory as an array, a model supported by various programming languages.

Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how the machine language instructions in that architecture identify the operand(s) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere.

In computer architecture, 36-bit integers, memory addresses, or other data units are those that are 36 bits wide. Also, 36-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size. 36-bit computers were popular in the early mainframe computer era from the 1950s through the early 1970s.

Byte addressing in hardware architectures supports accessing individual bytes. Computers with byte addressing are sometimes called byte machines, in contrast to word-addressable architectures, word machines, that access data by word.

Decimal computers are computers which can represent numbers and addresses in decimal as well as providing instructions to operate on those numbers and addresses directly in decimal, without conversion to a pure binary representation. Some also had a variable wordlength, which enabled operations on numbers with a large number of digits.

This timeline of binary prefixes lists events in the history of the evolution, development, and use of units of measure for information, the bit and the byte, which are germane to the definition of the binary prefixes by the International Electrotechnical Commission (IEC) in 1998.

An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost. Because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, along with the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.

Werner Buchholz was a German-American computer scientist. After growing up in Europe, Buchholz moved to Canada and then to the United States. He worked for International Business Machines (IBM) in New York. In June 1956, he coined the term "byte" for a unit of digital information. In 1990, he was recognized as a computer pioneer by the Institute of Electrical and Electronics Engineers.

The IBM System/360 architecture is the model independent architecture for the entire S/360 line of mainframe computers, including but not limited to the instruction set architecture. The elements of the architecture are documented in the IBM System/360 Principles of Operation and the IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information manuals.

In computer architecture, 256-bit integers, memory addresses, or other data units are those that are 256 bits wide. Also, 256-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size. There are currently no mainstream general-purpose processors built to operate on 256-bit integers or addresses, though a number of processors do operate on 256-bit data.

## References

1. Beebe, Nelson H. F. (2017-08-22). "Chapter I. Integer arithmetic". The Mathematical-Function Computation Handbook - Programming Using the MathCW Portable Software Library (1 ed.). Salt Lake City, UT, USA: Springer International Publishing AG. p. 970. doi:10.1007/978-3-319-64110-2. ISBN   978-3-319-64109-6. LCCN   2017947446. S2CID   30244721.
2. Dreyfus, Phillippe (1958-05-08) [1958-05-06]. Written at Los Angeles, California, USA. System design of the Gamma 60 (PDF). Western Joint Computer Conference: Contrasts in Computers. ACM, New York, NY, USA. pp. 130–133. IRE-ACM-AIEE '58 (Western). Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03. [...] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
As to the internal information length, the information quantum is called a "catena," and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
Not wanting to call a "quantum" a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a "catena." It is an English word and exists in Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. [...]
3. Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962). "4: Natural Data Units" (PDF). In Buchholz, Werner (ed.). Planning a Computer System – Project Stretch. McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA. pp. 39–40. LCCN   61-10466. Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03. [...] Terms used here to describe the structure imposed by the machine design, in addition to bit , are listed below.
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit.)
A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.)
Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...]
4. "Format" (PDF). Reference Manual 7030 Data Processing System (PDF). IBM. August 1961. pp. 50–57. Retrieved 2021-12-15.
5. Clippinger, Richard F. (1948-09-29). "A Logical Coding System Applied to the ENIAC (Electronic Numerical Integrator and Computer)". Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Report No. 673; Project No. TB3-0007 of the Research and Development Division, Ordnance Department. Retrieved 2017-04-05.
6. Clippinger, Richard F. (1948-09-29). "A Logical Coding System Applied to the ENIAC". Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Section VIII: Modified ENIAC. Retrieved 2017-04-05.
7. "4. Instruction Formats" (PDF). Intel Itanium Architecture Software Developer’s Manual. Vol. 3: Intel Itanium Instruction Set Reference. p. 3:293. Retrieved 2022-04-25. Three instructions are grouped together into 128-bit sized and aligned containers called bundles. Each bundle contains three 41-bit instruction slots and a 5-bit template field.
8. Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips (1997). Computer Architecture: Concepts and Evolution (1 ed.). Addison-Wesley. ISBN 0-201-10557-8. (1213 pages) (NB. This is a single-volume edition. This work was also available in a two-volume version.)
9. Ralston, Anthony; Reilly, Edwin D. (1993). Encyclopedia of Computer Science (3rd ed.). Van Nostrand Reinhold. ISBN 0-442-27679-6.