Tagged architecture

In computer science, a tagged architecture is a type of computer architecture in which every word of memory constitutes a tagged union: it is divided into a data portion and a tag portion that describes the type of the data, that is, how it is to be interpreted and, if it is a reference, the type of the object that it points to. [1] [2] [3]
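
As a minimal, purely illustrative sketch (not a description of any particular machine), a tagged word can be modeled in software as a small tag field packed alongside the data, as in the following C fragment; the field widths and tag values are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit tagged word: the top 3 bits are the tag,
 * the remaining 29 bits hold the data or a reference. */
enum tag { TAG_INTEGER = 0, TAG_REAL = 1, TAG_REFERENCE = 2 };

typedef uint32_t tagged_word;

static tagged_word make_word(enum tag t, uint32_t data) {
    return ((uint32_t)t << 29) | (data & 0x1FFFFFFFu);
}

static enum tag word_tag(tagged_word w)  { return (enum tag)(w >> 29); }
static uint32_t word_data(tagged_word w) { return w & 0x1FFFFFFFu; }

int main(void) {
    tagged_word w = make_word(TAG_INTEGER, 42);
    /* The tag is inspected before the data portion is interpreted. */
    if (word_tag(w) == TAG_INTEGER)
        printf("integer: %u\n", word_data(w));
    return 0;
}
```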

Architecture

In contrast, program and data memory are indistinguishable in the von Neumann architecture, so the meaning of a word depends entirely on how it is referenced.

Notable examples of American tagged architectures were the Lisp machines, which had tagged pointer support at the hardware and opcode level, the Burroughs large systems, which have a data-driven tagged and descriptor-based architecture, and the non-commercial Rice Computer. [4] Both the Burroughs large systems and the Lisp machines are examples of high-level language computer architectures, where tagging is used to support types from a high-level language at the hardware level.

In addition, the original Xerox Smalltalk implementation used the least-significant bit of each 16-bit word as a tag bit: if it was clear, the hardware treated the word as an aligned memory address, while if it was set, the word was treated as a (shifted) 15-bit integer. Current Intel documentation mentions that the lower bits of a memory address might be similarly used by some interpreter-based systems.
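
The encoding described above can be sketched in C as follows; this is an illustrative model of the 16-bit scheme, not the original Xerox code.

```c
#include <stdint.h>
#include <stdio.h>

/* Smalltalk-style tagging of a 16-bit word:
 * low bit set   -> a 15-bit small integer, stored shifted left by one;
 * low bit clear -> an aligned (even) object address. */
static int is_small_integer(uint16_t word) { return word & 1u; }

static int16_t small_integer_value(uint16_t word) {
    /* On typical two's-complement platforms the arithmetic shift
     * recovers the signed 15-bit value. */
    return (int16_t)((int16_t)word >> 1);
}

static uint16_t object_address(uint16_t word) { return word; /* already aligned */ }

int main(void) {
    uint16_t w = (uint16_t)((21u << 1) | 1u);   /* encode the small integer 21 */
    if (is_small_integer(w))
        printf("small integer: %d\n", small_integer_value(w));
    else
        printf("object address: 0x%04x\n", object_address(w));
    return 0;
}
```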

In the Soviet Union, the Elbrus series of supercomputers pioneered the use of tagged architectures in 1973.

Related Research Articles

In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies so the set of integer sizes available varies between different types of computers. Computer hardware nearly always provides a way to represent a processor register or memory address as an integer.
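
For illustration, C exposes integer types of several fixed sizes through <stdint.h>, and a memory address can be represented as an integer via uintptr_t; the following sketch only demonstrates these types on a typical platform.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t   a = -100;              /* 8-bit signed integer    */
    uint16_t b = 50000;             /* 16-bit unsigned integer */
    int32_t  c = -2000000000;       /* 32-bit signed integer   */
    uint64_t d = (uint64_t)1 << 40; /* 64-bit unsigned integer */

    /* A memory address represented as an integer. */
    uintptr_t addr = (uintptr_t)&c;

    printf("%" PRId8 " %" PRIu16 " %" PRId32 " %" PRIu64 "\n", a, b, c, d);
    printf("address of c as an integer: %" PRIuPTR "\n", addr);
    return 0;
}
```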

Lisp machine: computer specialized in running Lisp

Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture, and in a sense they were the first commercial single-user workstations. Despite being modest in number, Lisp machines commercially pioneered many now-commonplace technologies, including effective garbage collection, laser printing, windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, and networking innovations such as Chaosnet. Several firms built and sold Lisp machines in the 1980s: Symbolics, Lisp Machines Incorporated, Texas Instruments, and Xerox. The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp.

In processor design, microcode serves as an intermediary layer situated between the central processing unit (CPU) hardware and the programmer-visible instruction set architecture of a computer. It consists of a set of hardware-level instructions that implement higher-level machine code instructions or control internal finite-state machine sequencing in many digital processing components. While microcode is utilized in general-purpose CPUs in contemporary desktops, it also functions as a fallback path for scenarios that the faster hardwired control unit is unable to manage.

Symbolics, Inc., was a privately held American computer manufacturer that acquired the assets of the former company of the same name and continues to sell and maintain the Open Genera Lisp system and the Macsyma computer algebra system.

In computing, endianness is the order or sequence of bytes of a word of digital data in computer memory or data communication, identified by which byte comes "first", meaning stored at the smallest address or sent first. Endianness is primarily expressed as big-endian (BE) or little-endian (LE). A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest. A little-endian system, in contrast, stores the least-significant byte at the smallest address. Bi-endianness is a feature of numerous computer architectures that support switchable endianness for data fetches and stores or for instruction fetches. Other orderings are generically called middle-endian or mixed-endian.
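
A common way to observe a machine's byte order from C is to look at the first byte of a multi-byte integer; this is a small illustrative check, not part of any standard API.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t word = 0x01020304u;
    uint8_t bytes[4];
    memcpy(bytes, &word, sizeof word);   /* inspect the in-memory byte order */

    if (bytes[0] == 0x01)
        printf("big-endian: most significant byte at the smallest address\n");
    else if (bytes[0] == 0x04)
        printf("little-endian: least significant byte at the smallest address\n");
    else
        printf("other (middle-endian / mixed-endian) ordering\n");
    return 0;
}
```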

The Honeywell 6000 series computers were rebadged versions of General Electric's 600-series mainframes manufactured by Honeywell International, Inc. from 1970 to 1989. Honeywell acquired the line when it purchased GE's computer division in 1970 and continued to develop them under a variety of names for many years. In 1989, Honeywell sold its computer division to the French company Groupe Bull who continued to market compatible machines.

Interpreter (computing): program that executes source code without a separate compilation step

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:

  1. Parse the source code and perform its behavior directly;
  2. Translate source code into some efficient intermediate representation or object code and immediately execute that;
  3. Explicitly execute stored precompiled bytecode produced by a compiler and matched with the interpreter's virtual machine; a minimal dispatch loop of this kind is sketched below.
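
As an illustration of the third strategy, a dispatch loop over precompiled bytecode can be as small as the following C sketch; the opcode set is invented for the example.

```c
#include <stdio.h>

/* Invented opcodes for a tiny stack-based virtual machine. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[16];
    int sp = 0;                 /* stack pointer */
    for (int pc = 0; ; ) {      /* fetch-decode-execute loop */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Precompiled" program: push 2, push 3, add, print the result (5). */
    const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```
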
IBM 704: vacuum-tube computer system

The IBM 704 is a large digital mainframe computer introduced by IBM in 1954. It was the first mass-produced computer with hardware for floating-point arithmetic. The IBM 704 Manual of operation states:

The type 704 Electronic Data-Processing Machine is a large-scale, high-speed electronic calculator controlled by an internally stored program of the single address type.

In computer programming, CAR (car) and CDR (cdr) are primitive operations on cons cells introduced in the Lisp programming language. A cons cell is composed of two pointers; the car operation extracts the first pointer, and the cdr operation extracts the second.
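
A cons cell and its two primitive accessors can be modeled directly in C; this is an illustrative model, not the representation used by any particular Lisp implementation.

```c
#include <stdio.h>
#include <stdlib.h>

/* A cons cell: two pointers, conventionally called car and cdr. */
typedef struct cons {
    void *car;          /* first pointer                     */
    struct cons *cdr;   /* second pointer (rest of the list) */
} cons;

static cons *make_cons(void *car, cons *cdr) {
    cons *c = malloc(sizeof *c);
    c->car = car;
    c->cdr = cdr;
    return c;
}

int main(void) {
    int one = 1, two = 2;
    /* Build the list (1 2), i.e. (cons 1 (cons 2 nil)). */
    cons *list = make_cons(&one, make_cons(&two, NULL));

    printf("car:  %d\n", *(int *)list->car);        /* first element: 1  */
    printf("cadr: %d\n", *(int *)list->cdr->car);   /* second element: 2 */
    return 0;
}
```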

PDP-6: 36-bit mainframe computer (1964–1966)

The PDP-6, short for Programmed Data Processor model 6, is a computer developed by Digital Equipment Corporation (DEC) during 1963 and first delivered in the summer of 1964. It was an expansion of DEC's existing 18-bit systems to use a 36-bit data word, which was at that time a common word size for large machines like IBM mainframes. The system was constructed using the same germanium transistor-based System Module layout as DEC's earlier machines, like the PDP-1 and PDP-4.

The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables. The first machine in the family was the B5000 in 1961, which was optimized for compiling ALGOL 60 programs extremely well, using single-pass compilers. The B5000 evolved into the B5500 and the B5700. Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line.

A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, as in the DEC PDP-10 and ICT 1900.

In computer programming, bounds checking is any method of detecting whether a variable is within some bounds before it is used. It is usually used to ensure that a number fits into a given type, or that a variable being used as an array index is within the bounds of the array. A failed bounds check usually results in the generation of some sort of exception signal.
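
In languages without automatic checks, a bounds check is typically an explicit comparison performed before the access, as in this C sketch.

```c
#include <stdio.h>
#include <stddef.h>

#define LEN 8

/* Copy the element at `index` into *out, or report a failed bounds check. */
static int checked_read(const int *array, size_t len, size_t index, int *out) {
    if (index >= len) {
        fprintf(stderr, "bounds check failed: index %zu, length %zu\n", index, len);
        return 0;   /* a checked architecture would raise an exception here */
    }
    *out = array[index];
    return 1;
}

int main(void) {
    int data[LEN] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    int value;
    if (checked_read(data, LEN, 3, &value))
        printf("data[3] = %d\n", value);
    checked_read(data, LEN, 12, &value);   /* out of bounds: rejected, not executed */
    return 0;
}
```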

The Burroughs B1000 Series was a series of mainframe computers, built by the Burroughs Corporation, and originally introduced in the 1970s with continued software development until 1987. The series consisted of three major generations which were the B1700, B1800, and B1900 series machines. They were also known as the Burroughs Small Systems, by contrast with the Burroughs Large Systems and the Burroughs Medium Systems.

Descriptors are an architectural feature of Burroughs large systems, including the current Unisys ClearPath/MCP systems. Apart from being stack- and tag-based, these systems are notably descriptor-based. Descriptors are the means of referring to data that does not reside on the stack, such as arrays and objects. Descriptors are also used for string data, as in compilers and commercial applications.

In computer architecture, 31-bit integers, memory addresses, or other data units are those that are 31 bits wide.

In computer science, a tagged pointer is a pointer with additional data associated with it, such as an indirection bit or reference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags". Strictly speaking, "tag" refers to data specifying a type rather than other data, but the usage "tagged pointer" is nonetheless ubiquitous.
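
For example, because objects are commonly aligned to 8 or 16 bytes, the low bits of an address are normally zero and can carry a small tag. The following C sketch folds a hypothetical 3-bit type tag into the low bits of a pointer; the tag values are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdalign.h>

#define TAG_MASK 0x7u   /* 3 low bits are free when objects are 8-byte aligned */

static void *tag_pointer(void *p, unsigned tag) {
    return (void *)((uintptr_t)p | (tag & TAG_MASK));
}

static unsigned pointer_tag(void *p) { return (unsigned)((uintptr_t)p & TAG_MASK); }
static void *strip_tag(void *p)      { return (void *)((uintptr_t)p & ~(uintptr_t)TAG_MASK); }

int main(void) {
    alignas(8) double value = 3.5;          /* 8-byte aligned object           */
    void *tagged = tag_pointer(&value, 2);  /* hypothetical tag 2, e.g. "real" */

    printf("tag: %u, value: %f\n", pointer_tag(tagged), *(double *)strip_tag(tagged));
    return 0;
}
```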

The Rice Institute Computer, also known as the Rice Computer or R1, was a 54-bit tagged architecture digital computer built during 1958–1961 on the campus of Rice University, Houston, Texas, United States. Operating as Rice's primary computer until the middle 1960s, the Rice Institute Computer was decommissioned in 1971. The system initially used vacuum tubes and semiconductor diodes for its logic circuits; some later peripherals were built in solid-state emitter-coupled logic. It was designed by Martin H. Graham. A copy of the machine called OSAGE was built and operated at the University of Oklahoma.

A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design, coined in McKeeman (1967) and primarily used in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s. This followed the dramatic failure of the Intel 432 (1981) and the emergence of optimizing compilers and reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in Ditzel & Patterson (1980).

John Iliffe (computer designer): British computer designer (1931–2020)

John Kenneth Iliffe was a British computer designer who worked on the design and evaluation of computers that supported fine-grained memory protection and object management. He implemented, evaluated and refined such designs in the Rice Institute Computer, R1 (1958–61) and the ICL Basic Language Machine (1963–68). A key feature in the architectures of both machines was control by the hardware of the formation and use of memory references so that the memory could be seen as a collection of data objects of defined sizes whose integrity is protected from the consequences of errors in address calculation, such as overrunning memory pointers.

References

  1. The Memory Management Glossary: Tagged architecture
  2. Feustel, Edward A. (July 1973). "On the Advantages of Tagged Architecture" (PDF). IEEE Transactions on Computers: 644–656. Archived (PDF) from the original on May 23, 2013. Retrieved January 21, 2013.
  3. Feustel, Edward A. (1972). "The Rice Research Computer – A tagged architecture" (PDF). Proceedings of the 1972 Spring Joint Computer Conference. American Federation of Information Processing Societies (AFIPS). pp. 369–377. Archived (PDF) from the original on September 24, 2015. Retrieved July 27, 2014.
  4. Thornton, Adam. "A Brief History of the Rice Computer 1959-1971". Archived from the original on February 24, 2008. Retrieved January 31, 2013. (Mostly written in or before 1994; the archive date is indicated by "20080224" in the Wayback Machine URL.)