In computer science, a tagged architecture is a computer architecture in which every word of memory constitutes a tagged union, divided into a number of data bits and a tag section that describes the type of the data: how it is to be interpreted and, if it is a reference, the type of the object that it points to.[1][2][3]
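A minimal C sketch of such a word, loosely following the Burroughs layout of a 3-bit tag alongside 48 data bits, might look like the following; the tag names and values are invented for illustration, not any real machine's encoding.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative tagged word: 48 bits of payload plus a 3-bit tag
       that says how the payload must be interpreted. */
    typedef struct {
        uint64_t data : 48;  /* payload: integer, address, descriptor, ... */
        uint64_t tag  :  3;  /* describes the type of the payload */
    } tagged_word;

    enum { TAG_OPERAND = 0, TAG_CODE = 3 };  /* hypothetical tag values */

    int main(void) {
        tagged_word w = { .data = 42, .tag = TAG_OPERAND };
        /* Real tagged hardware consults the tag before every operation;
           software can only simulate that check. */
        if (w.tag == TAG_OPERAND)
            printf("operand word holding %llu\n", (unsigned long long)w.data);
        else
            printf("tag %u: arithmetic on this word would fault\n",
                   (unsigned)w.tag);
        return 0;
    }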
Some early systems use tagging of data in memory but lack some of the characteristics now considered to be part of tagged architectures.
The RCA 601[4] has a 3-bit tag register and a 3-bit tag for every 24-bit half-word. Every instruction can request a test for an equal or unequal tag, causing a maskable interrupt if the specified match fails. There is no architectural connection between the tag and the contents of the half-word; the tag's meaning is determined entirely by software.
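The mechanism can be pictured with a small software simulation; the structure and names below are invented for the sketch, not RCA's actual formats.

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented model of a tagged half-word: 3-bit tag, 24 data bits. */
    struct halfword { unsigned tag : 3; unsigned data : 24; };

    static unsigned tag_register = 5;            /* the 3-bit tag register */
    static bool     tag_interrupt_masked = false;

    /* An instruction may request a test for an equal or unequal tag;
       if the requested match fails, a maskable interrupt is raised. */
    static void check_tag(struct halfword w, bool require_equal) {
        bool matched = (w.tag == tag_register) == require_equal;
        if (!matched && !tag_interrupt_masked)
            printf("tag interrupt: operand tag %u, tag register %u\n",
                   w.tag, tag_register);
    }

    int main(void) {
        struct halfword w = { .tag = 3, .data = 0 };
        check_tag(w, true);   /* tags differ, so the equal test interrupts */
        return 0;
    }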
The Burroughs B5000,[5] B5500[6] and B5700 have 48-bit words with no appended tag field: character, instruction and numeric (floating-point) words carry no tags, although all of the control word formats include a 3-bit tag within the word. The replacement architecture, starting with the B6500, does have a tag for every word.
In contrast, program and data memory are indistinguishable in the von Neumann architecture, so the meaning of a word depends entirely on how it is referenced.
Notable examples of American tagged architectures are the Lisp machines, which had tagged pointer support at the hardware and opcode level; the Burroughs B6500 and its successors, which have a data-driven tagged and descriptor-based architecture; and the non-commercial Rice Computer.[7] Both the Burroughs machines and the Lisp machines are examples of high-level language computer architectures, where tagging is used to support types from a high-level language at the hardware level.
In addition, the original Xerox Smalltalk implementation used the least-significant bit of each 16-bit word as a tag bit: if it was clear, the hardware accepted the word as an aligned memory address, while if it was set, the word was treated as a (shifted) 15-bit integer. Current Intel documentation mentions that the lower bits of a memory address may be used similarly by some interpreter-based systems.
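A C sketch of this kind of low-bit tagging, with invented helper names, might look as follows.

    #include <stdint.h>
    #include <stdio.h>

    /* Bit 0 of a 16-bit word distinguishes an aligned (even) address
       from a 15-bit integer stored shifted left by one. */
    static int      is_small_int(uint16_t w)  { return w & 1; }
    static uint16_t make_small_int(int16_t v) {
        return (uint16_t)((uint16_t)v << 1) | 1;
    }
    static int16_t  small_int_value(uint16_t w) {
        return (int16_t)w >> 1;  /* assumes arithmetic right shift,
                                    as on common compilers */
    }

    int main(void) {
        uint16_t w = make_small_int(-100);
        if (is_small_int(w))
            printf("small integer: %d\n", small_int_value(w)); /* -100 */
        else
            printf("object address: 0x%04X\n", (unsigned)w);
        return 0;
    }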
In the Soviet Union, the Elbrus series of supercomputers pioneered the use of tagged architectures in 1973.
The Burroughs Corporation was a major American manufacturer of business equipment. The company was founded in 1886 as the American Arithmometer Company by William Seward Burroughs. In 1986, it merged with the Sperry Corporation to form Unisys. The company's history paralleled many of the major developments in computing: at its start, it produced mechanical adding machines, and later moved into programmable ledgers and then computers. It was one of the largest producers of mainframe computers in the world, also producing related equipment including typewriters and printers.
Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture. In a sense, they were the first commercial single-user workstations. Despite being modest in number, Lisp machines commercially pioneered many now-commonplace technologies, including effective garbage collection, laser printing, windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, and networking innovations such as Chaosnet. Several firms built and sold Lisp machines in the 1980s: Symbolics, Lisp Machines Incorporated, Texas Instruments, and Xerox. The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp.
In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined.
The IBM 704 is the model name of a large digital mainframe computer introduced by IBM in 1954. Designed by John Backus and Gene Amdahl, it was the first mass-produced computer with hardware for floating-point arithmetic. The IBM 704 Manual of operation states:
The type 704 Electronic Data-Processing Machine is a large-scale, high-speed electronic calculator controlled by an internally stored program of the single address type.
The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables. The first machine in the family was the B5000 in 1961, designed to compile ALGOL 60 programs exceptionally well using single-pass compilers. The B5000 evolved into the B5500 and the B5700. Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line.
Memory segmentation is an operating system memory management technique of dividing a computer's primary memory into segments or sections. In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset within that segment. Segments or sections are also used in object files of compiled programs when they are linked together into a program image and when the image is loaded into memory.
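A toy C illustration of segment-plus-offset translation, with an invented two-entry segment table, shows the basic lookup and limit check.

    #include <stdio.h>
    #include <stdint.h>

    /* Each segment is described by a base address and a limit. */
    struct segment { uint32_t base; uint32_t limit; };

    static struct segment segment_table[] = {
        { 0x0000, 0x0FFF },   /* segment 0: e.g. code */
        { 0x4000, 0x03FF },   /* segment 1: e.g. data */
    };

    /* A reference names a segment and an offset; translation checks the
       offset against the segment's limit and adds the base. */
    static int translate(unsigned seg, uint32_t offset, uint32_t *physical) {
        if (offset > segment_table[seg].limit)
            return 0;                                 /* segmentation fault */
        *physical = segment_table[seg].base + offset;
        return 1;
    }

    int main(void) {
        uint32_t pa;
        if (translate(1, 0x0100, &pa))
            printf("segment 1, offset 0x100 -> physical 0x%04X\n",
                   (unsigned)pa);
        if (!translate(1, 0x0800, &pa))
            printf("offset 0x800 exceeds segment 1's limit\n");
        return 0;
    }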
The Burroughs B1000 Series was a series of mainframe computers, built by the Burroughs Corporation, and originally introduced in the 1970s with continued software development until 1987. The series consisted of three major generations which were the B1700, B1800, and B1900 series machines. They were also known as the Burroughs Small Systems, by contrast with the Burroughs Large Systems and the Burroughs Medium Systems.
The Burroughs B6x00-7x00 instruction set includes the set of valid operations for the Burroughs B6500, B7500 and later Burroughs large systems, including the current Unisys ClearPath/MCP systems; it does not include the instructions of other Burroughs large systems such as the B5000, B5500, B5700 and the B8500. These machines have a distinctive design and instruction set: each word of data is associated with a type, and the effect of an operation on that word can depend on the type. Further, the machines are stack-based to the point that they have no user-addressable registers.
Descriptors are an architectural feature of the Burroughs large systems, including the current Unisys ClearPath/MCP systems. Apart from being stack- and tag-based, a notable architectural feature of these systems is that they are descriptor-based: descriptors are the means of referencing data that does not reside on the stack, such as arrays and objects. Descriptors are also used for string data, as in compilers and commercial applications.
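As a rough illustration (the field layout is invented, not the actual MCP word format), a data descriptor can be modeled as a base, a length and a presence bit, with every indexing operation checked against the length.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t base;         /* starting address of the array */
        uint32_t length;       /* number of elements */
        unsigned present : 1;  /* set when the data is in main memory */
    } descriptor;

    /* Indexing through a descriptor is bounds-checked by the hardware;
       here software simulates the check. */
    static int64_t *index_desc(descriptor d, int64_t *memory, uint32_t i) {
        if (!d.present) { printf("presence fault\n"); return NULL; }
        if (i >= d.length) {
            printf("bounds violation at index %u\n", (unsigned)i);
            return NULL;
        }
        return &memory[d.base + i];
    }

    int main(void) {
        int64_t memory[100] = {0};
        descriptor d = { .base = 10, .length = 5, .present = 1 };
        *index_desc(d, memory, 2) = 7;     /* in bounds */
        printf("%lld\n", (long long)memory[12]);
        index_desc(d, memory, 9);          /* triggers bounds violation */
        return 0;
    }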
In computer architecture, 48-bit integers can represent 281,474,976,710,656 (2⁴⁸ or 2.814749767×10¹⁴) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (2⁴⁸ − 1) or a signed two's complement range of −140,737,488,355,328 (−2⁴⁷) through 140,737,488,355,327 (2⁴⁷ − 1). A 48-bit memory address can directly address every byte of 256 terabytes of storage. 48-bit can refer to any other data unit that consumes 48 bits (6 octets) in width. Examples include 48-bit CPU and ALU architectures that are based on registers, address buses, or data buses of that size.
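These figures can be reproduced with a short C check that holds the 48-bit quantities in 64-bit host integers.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* A 48-bit field can encode 2^48 distinct values. */
        uint64_t values = 1ULL << 48;
        uint64_t umax   = values - 1;                /* unsigned: 0 .. 2^48 - 1 */
        int64_t  smin   = -(int64_t)(1ULL << 47);    /* two's complement minimum */
        int64_t  smax   = (int64_t)(1ULL << 47) - 1; /* two's complement maximum */

        printf("distinct values: %llu\n", (unsigned long long)values);
        printf("unsigned range:  0 .. %llu\n", (unsigned long long)umax);
        printf("signed range:    %lld .. %lld\n", (long long)smin, (long long)smax);
        return 0;
    }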
In computer science, a tagged pointer is a pointer with additional data associated with it, such as an indirection bit or reference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags", though strictly speaking "tag" refers to data specifying a type, not other data; however, the usage "tagged pointer" is ubiquitous.
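A common software variant of this idea folds a small tag into the low bits of an aligned pointer. The helper names below are invented for the sketch, which assumes allocations aligned to at least 4 bytes (true of malloc on common platforms).

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A 2-bit tag kept in the low bits of a pointer; those bits are
       otherwise zero because the pointee is at least 4-byte aligned. */
    enum { TAG_MASK = 0x3 };

    typedef uintptr_t tagged_ptr;

    static tagged_ptr tag_ptr(void *p, unsigned tag) {
        assert(((uintptr_t)p & TAG_MASK) == 0);  /* pointer must be aligned */
        assert(tag <= TAG_MASK);
        return (uintptr_t)p | tag;
    }

    static void *ptr_of(tagged_ptr t)    { return (void *)(t & ~(uintptr_t)TAG_MASK); }
    static unsigned tag_of(tagged_ptr t) { return (unsigned)(t & TAG_MASK); }

    int main(void) {
        int *x = malloc(sizeof *x);
        *x = 42;
        tagged_ptr t = tag_ptr(x, 2);    /* fold tag value 2 into the address */
        printf("tag=%u value=%d\n", tag_of(t), *(int *)ptr_of(t));
        free(ptr_of(t));
        return 0;
    }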
CPU modes are operating modes for the central processing unit of most computer architectures that place restrictions on the type and scope of operations that can be performed by instructions being executed by the CPU. For example, this design allows an operating system to run with more privileges than application software by running the operating system and applications in different modes.
The Burroughs B5000 was the first stack machine and also the first computer with a segmented virtual memory. The Burroughs B5000 instruction set includes the set of valid operations for the B5000, B5500 and B5700. It is not compatible with the B6500, B7500, B8500 or their successors.
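A toy C interpreter conveys the zero-address flavor of stack-machine code: operands are pushed, and operators consume the top of the stack. The opcodes are invented for illustration.

    #include <stdio.h>

    enum op { PUSH, ADD, MUL };
    struct insn { enum op op; int imm; };

    int main(void) {
        /* computes (2 + 3) * 4 with no named registers */
        struct insn program[] = { {PUSH,2}, {PUSH,3}, {ADD,0}, {PUSH,4}, {MUL,0} };
        int stack[16], sp = 0;
        for (int i = 0; i < (int)(sizeof program / sizeof program[0]); i++) {
            switch (program[i].op) {
            case PUSH: stack[sp++] = program[i].imm;      break;
            case ADD:  sp--; stack[sp-1] += stack[sp];    break;
            case MUL:  sp--; stack[sp-1] *= stack[sp];    break;
            }
        }
        printf("%d\n", stack[0]);   /* prints 20 */
        return 0;
    }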
The history of general-purpose CPUs is a continuation of the earlier history of computing hardware.
The Rice Institute Computer, also known as the Rice Computer or R1, was a 54-bit tagged architecture digital computer built during 1958–1961 on the campus of Rice University, Houston, Texas, United States. Operating as Rice's primary computer until the middle 1960s, the Rice Institute Computer was decommissioned in 1971. The system initially used vacuum tubes and semiconductor diodes for its logic circuits; some later peripherals were built in solid-state emitter-coupled logic. It was designed by Martin H. Graham. A copy of the machine called OSAGE was built and operated at the University of Oklahoma.
A decimal computer is a computer that can represent numbers and addresses in decimal and that provides instructions to operate on those numbers and addresses directly in decimal, without conversion to a pure binary representation. Some also had a variable word length, which enabled operations on numbers with a large number of digits.
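As a rough illustration of native decimal operation, the following C sketch adds two packed-BCD numbers digit by digit with a decimal carry; the encoding (one decimal digit per nibble) is just one common decimal representation.

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 4-digit packed-BCD numbers, one digit per nibble. */
    static uint16_t bcd_add(uint16_t a, uint16_t b) {
        uint16_t result = 0;
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned d = ((a >> (4 * i)) & 0xF) + ((b >> (4 * i)) & 0xF) + carry;
            carry = d >= 10;          /* decimal, not binary, carry */
            if (carry) d -= 10;
            result |= (uint16_t)((d & 0xF) << (4 * i));
        }
        return result;
    }

    int main(void) {
        /* 0x1234 encodes decimal 1234; 0x0879 encodes decimal 879 */
        printf("%04X\n", (unsigned)bcd_add(0x1234, 0x0879)); /* prints 2113 */
        return 0;
    }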
An asymmetric multiprocessing system is a multiprocessor computer system where not all of the multiple interconnected central processing units (CPUs) are treated equally. For example, a system might allow only one CPU to execute operating system code or might allow only one CPU to perform I/O operations. Other AMP systems might allow any CPU to execute operating system code and perform I/O operations, so that they were symmetric with regard to processor roles, but attached some or all peripherals to particular CPUs, so that they were asymmetric with respect to the peripheral attachment.
A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design, a term coined in McKeeman (1967). HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s, following the dramatic failure of the Intel 432 (1981) and the emergence of optimizing compilers, reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in Ditzel & Patterson (1980).
John Kenneth Iliffe was a British computer designer who worked on the design and evaluation of computers that supported fine-grained memory protection and object management. He implemented, evaluated and refined such designs in the Rice Institute Computer, R1 (1958–61) and the ICL Basic Language Machine (1963–68). A key feature in the architectures of both machines was control by the hardware of the formation and use of memory references so that the memory could be seen as a collection of data objects of defined sizes whose integrity is protected from the consequences of errors in address calculation, such as overrunning memory pointers.