Accumulator (computing)

Walther WSR-16 mechanical calculator. The row of digit wheels in the carriage (at the front) is the accumulator.

In a computer's central processing unit (CPU), the accumulator is a register in which intermediate results from the arithmetic logic unit (ALU) are stored.


Without a register like an accumulator, it would be necessary to write the result of each calculation (addition, multiplication, shift, etc.) to cache or main memory, perhaps only to be read right back again for use in the next operation.

Accessing memory is slower than accessing a register like an accumulator because the technology used for the large main memory is slower (but cheaper) than that used for a register. Early electronic computer systems were often split into two groups, those with accumulators and those without.

Modern computer systems often have multiple general-purpose registers that can operate as accumulators, and the term is no longer as common as it once was. However, to simplify their design, a number of special-purpose processors still use a single accumulator.

Basic concept

Mathematical operations often take place in a stepwise fashion, using the results from one operation as the input to the next. For instance, a manual calculation of a worker's weekly payroll might look something like:

  1. look up the number of hours worked from the employee's time card
  2. look up the pay rate for that employee from a table
  3. multiply the hours by the pay rate to get their basic weekly pay
  4. multiply their basic pay by a fixed percentage to account for income tax
  5. subtract that number from their basic pay to get their weekly pay after tax
  6. multiply that result by another fixed percentage to account for retirement plans
  7. subtract that number from the pay after tax to get their weekly pay after all deductions

A computer program carrying out the same task would follow the same basic sequence of operations, although the values being looked up would all be stored in computer memory. In early computers, the number of hours would likely be held on a punch card and the pay rate in some other form of memory, perhaps a magnetic drum. Once the multiplication is complete, the result needs to be placed somewhere. On a "drum machine" this would likely be back to the drum, an operation that takes considerable time. Then the very next operation has to read that value back in, which introduces another considerable delay.

Accumulators dramatically improve performance in systems like these by providing a scratchpad area where the results of one operation can be fed to the next one for little or no performance penalty. In the example above, the basic weekly pay would be calculated and placed in the accumulator, which could then immediately be used by the income tax calculation. This removes one save and one read operation from the sequence, operations that generally took tens to hundreds of times as long as the multiplication itself.
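To make the sequence concrete, here is a minimal sketch in Python of the payroll steps above, with a single variable standing in for the accumulator. The figures, names, and rates are illustrative, not taken from the source.

    memory = {
        "hours_worked": 40,       # from the time card
        "pay_rate": 25.00,        # from the pay-rate table
        "tax_rate": 0.20,         # fixed income-tax percentage
        "retirement_rate": 0.05,  # fixed retirement-plan percentage
    }

    acc = memory["hours_worked"]                 # load hours into the accumulator
    acc = acc * memory["pay_rate"]               # basic weekly pay stays in the accumulator
    basic_pay = acc                              # one value we must keep for later steps
    acc = basic_pay * memory["tax_rate"]         # income-tax deduction
    acc = basic_pay - acc                        # weekly pay after tax
    after_tax = acc
    acc = after_tax * memory["retirement_rate"]  # retirement-plan deduction
    acc = after_tax - acc                        # weekly pay after all deductions
    print(acc)                                   # 760.0 with the figures above

Each intermediate result stays in acc and feeds the next step directly; only basic_pay and after_tax need to be written back to named storage.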

Accumulator machines

An accumulator machine, also called a 1-operand machine, or a CPU with accumulator-based architecture, is a kind of CPU where, although it may have several registers, the CPU mostly stores the results of calculations in one special register, typically called "the accumulator". Almost all early computers were accumulator machines, with only the high-performance "supercomputers" having multiple registers. Then, as mainframe systems gave way to microcomputers, accumulator architectures were again popular, the MOS 6502 being a notable example. Many 8-bit microcontrollers that are still popular as of 2014, such as the PICmicro and 8051, are accumulator-based machines.

Modern CPUs are typically 2-operand or 3-operand machines. The additional operands specify which of the many general-purpose registers (also called "general-purpose accumulators" [1]) are used as the source and destination for calculations. These CPUs are not considered "accumulator machines".

The characteristic that distinguishes one register as the accumulator of a computer architecture is that it is used as an implicit operand for arithmetic instructions. For instance, a CPU might have an instruction like ADD memaddress, which adds the value read from memory location memaddress to the value in the accumulator, placing the result back in the accumulator. The accumulator is not identified in the instruction by a register number; it is implicit in the instruction, and no other register can be specified in its place. Some architectures use a particular register as an accumulator in some instructions, while other instructions use register numbers for explicit operand specification.
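As an illustration, here is a minimal sketch in Python of a 1-operand machine: every arithmetic instruction names only a memory address, and the accumulator is always the implicit second operand and the destination. The mnemonics (LOAD, ADD, SUB, STORE) are illustrative and not taken from any particular real instruction set.

    def run(program, memory):
        """Execute a list of (opcode, address) pairs against a memory dict."""
        acc = 0
        for op, addr in program:
            if op == "LOAD":     # acc <- memory[addr]
                acc = memory[addr]
            elif op == "ADD":    # acc <- acc + memory[addr]
                acc += memory[addr]
            elif op == "SUB":    # acc <- acc - memory[addr]
                acc -= memory[addr]
            elif op == "STORE":  # memory[addr] <- acc
                memory[addr] = acc
            else:
                raise ValueError(f"unknown opcode: {op}")
        return acc

    memory = {0: 7, 1: 5, 2: 0}
    program = [("LOAD", 0), ("ADD", 1), ("STORE", 2)]  # memory[2] = memory[0] + memory[1]
    run(program, memory)
    print(memory[2])  # 12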

History of the computer accumulator

Any system that uses a single "memory" to store the result of multiple operations can be considered an accumulator. J. Presper Eckert refers to even the earliest adding machines of Gottfried Leibniz and Blaise Pascal as accumulator-based systems. [2] Percy Ludgate was the first to conceive a multiplier-accumulator (MAC) in his Analytical Machine of 1909. [3]

Historical convention dedicates a register to "the accumulator", an "arithmetic organ" that literally accumulates its number during a sequence of arithmetic operations:

"The first part of our arithmetic organ ... should be a parallel storage organ which can receive a number and add it to the one already in it, which is also able to clear its contents and which can store what it contains. We will call such an organ an Accumulator. It is quite conventional in principle in past and present computing machines of the most varied types, e.g. desk multipliers, standard IBM counters, more modern relay machines, the ENIAC" (Goldstine and von Neumann, 1946; p. 98 in Bell and Newell 1971).

Just a few of the instructions are, for example (with some modern interpretation): clear the accumulator (set it to zero), add the contents of a memory location to the accumulator, and store the contents of the accumulator to a memory location.

No convention exists regarding the names for operations that move values from registers or memory to the accumulator and back. Donald Knuth's (1973) hypothetical MIX computer, for example, traditionally uses two such instructions: load accumulator from register/memory (e.g. "LDA r") and store accumulator to register/memory (e.g. "STA r"). Knuth's model has many other instructions as well.
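A tiny sketch in Python of that conventional load/store pair around an addition, with the MIX-style mnemonics shown as comments; the register names and values are illustrative.

    registers = {"r1": 10, "r2": 32, "r3": 0}
    acc = 0

    acc = registers["r1"]        # LDA r1 -- load accumulator from register r1
    acc = acc + registers["r2"]  # ADD r2 -- the accumulator is the implicit destination
    registers["r3"] = acc        # STA r3 -- store accumulator to register r3
    print(registers["r3"])       # 42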

Notable accumulator-based computers

Front panel of an IBM 701 computer with lights displaying the accumulator and other registers

The 1945 configuration of ENIAC had 20 accumulators, which could operate in parallel. [4]:46 Each one could store an eight-digit decimal number and add to it (or subtract from it) a number it received. [4]:33 Most of IBM's early binary "scientific" computers, beginning with the vacuum-tube IBM 701 in 1952, used a single 36-bit accumulator, along with a separate multiplier/quotient register to handle operations with longer results. The IBM 650, a decimal machine, had one 10-digit distributor and two 10-digit accumulators; the IBM 7070, a later, transistorized decimal machine, had three accumulators. The IBM System/360 and Digital Equipment Corporation's PDP-6 had 16 general-purpose registers, although the PDP-6 and its successor, the PDP-10, call them accumulators.

The 12-bit PDP-8 was one of the first minicomputers to use accumulators, and inspired many later machines. [5] The PDP-8 had just one accumulator. The HP 2100 and Data General Nova had two and four accumulators, respectively. The Nova was created after a proposed follow-on to the PDP-8 was rejected in favor of what would become the PDP-11. The Nova's four accumulators, AC0-AC3, could also be used (in the case of AC2 and AC3) to provide offset addresses, tending towards more general usage of the registers. The PDP-11 had 8 general-purpose registers, along the lines of the System/360 and PDP-10; most later CISC and RISC machines provided multiple general-purpose registers.

Early 4-bit and 8-bit microprocessors such as the 4004, 8008 and numerous others typically had single accumulators. The 8051 microcontroller has two: a primary accumulator A and a secondary accumulator B, where the second is used by instructions only when multiplying (MUL AB) or dividing (DIV AB); the former splits the 16-bit result between the two 8-bit accumulators, whereas the latter stores the quotient in the primary accumulator A and the remainder in the secondary accumulator B. As direct descendants of the 8008, the 8080, and the 8086, the modern, ubiquitous Intel x86 processors still use the primary accumulator EAX and the secondary accumulator EDX for multiplication and division of large numbers. For instance, MUL ECX will multiply the 32-bit registers ECX and EAX and split the 64-bit result between EAX and EDX. However, MUL and DIV are special cases; other arithmetic-logical instructions (ADD, SUB, CMP, AND, OR, XOR, TEST) may specify any of the eight registers EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI as the accumulator (i.e. left operand and destination). This is also supported for multiply if the upper half of the result is not required. x86 is thus a fairly general register architecture, despite being based on an accumulator model. [6] The 64-bit extension of x86, x86-64, was further generalized to 16 instead of 8 general registers.
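For example, here is a minimal sketch in Python of how MUL ECX splits its 64-bit product between EAX (low half) and EDX (high half), and how DIV ECX uses the same register pair for the quotient and remainder; the registers are modeled as plain variables and the values are illustrative.

    MASK32 = 0xFFFFFFFF

    def mul_ecx(eax, ecx):
        """MUL ECX: EDX:EAX <- EAX * ECX (unsigned 32-bit multiply, 64-bit result)."""
        product = (eax & MASK32) * (ecx & MASK32)
        return product & MASK32, (product >> 32) & MASK32  # (EAX, EDX)

    def div_ecx(eax, edx, ecx):
        """DIV ECX: EAX <- EDX:EAX // ECX, EDX <- EDX:EAX % ECX (unsigned)."""
        dividend = ((edx & MASK32) << 32) | (eax & MASK32)
        return (dividend // ecx) & MASK32, dividend % ecx   # (EAX, EDX)

    eax, edx = mul_ecx(0xFFFFFFFF, 0x10)  # 0xFFFFFFFF * 16
    print(hex(edx), hex(eax))             # 0xf 0xfffffff0
    eax, edx = div_ecx(eax, edx, 0x10)    # divide the 64-bit value back down
    print(hex(eax), hex(edx))             # 0xffffffff 0x0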


References

  1. "HC16 Overview". Freescale.com. Archived from the original on 28 September 2007. Retrieved 2008-09-22.
  2. J. Presper Eckert, "A Survey of Digital Computer Memory Systems", IEEE Annals of the History of Computing, 1988, pp. 15-28.
  3. "The Feasibility of Ludgate's Analytical Machine".
  4. 1 2 Haigh, Thomas; Priestley, Mark; Ropefir, Crispin (2016). ENIAC in Action: Making and Remaking the Modern Computer. MIT Press. ISBN   9780262334419.
  5. Programmed Data Processor-1 Manual (PDF), Maynard, Massachusetts: Digital Equipment Corporation, 1961, p. 7: PDP-1 system block diagram, archived (PDF) from the original on 2022-10-09, retrieved 2014-07-03
  6. Irvine, Kip R. (2007). Assembly Language for Intel-Based Computers (5th ed.). Pearson Prentice Hall. pp. 633, 622. ISBN   978-0-13-238310-3.