Intel iAPX 432

General information
Launched: late 1981
Discontinued: ca. 1985
Common manufacturer(s)
  • Intel
Performance
Max. CPU clock rate: 5 MHz to 8 MHz
Intel SBC 432/100 board
General Data Processor 43201
General Data Processor 43202

The iAPX 432 (Intel Advanced Performance Architecture) is a discontinued computer architecture introduced in 1981. [1] [NB 1] It was Intel's first 32-bit processor design. The main processor of the architecture, the general data processor, is implemented as a set of two separate integrated circuits, due to technical limitations at the time. Although some early 8086, 80186 and 80286-based systems and manuals also used the iAPX prefix for marketing reasons, the iAPX 432 and the 8086 processor lines are completely separate designs with completely different instruction sets.


The project started in 1975 as the 8800 (after the 8008 and the 8080) and was intended to be Intel's major design for the 1980s. Unlike the 8086, which was designed the following year as a successor to the 8080, the iAPX 432 was a radical departure from Intel's previous designs meant for a different market niche, and completely unrelated to the 8080 or x86 product lines.

The iAPX 432 project is considered a commercial failure for Intel, and was discontinued in 1986. [1] [3]

Description

The iAPX 432 was referred to as a "micromainframe", designed to be programmed entirely in high-level languages. [4] [5] The instruction set architecture was also entirely new and a significant departure from Intel's previous 8008 and 8080 processors as the iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, [5] garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors. Intel iMAX 432 is a discontinued operating system for the 432, [6] written entirely in Ada, and Ada was also the intended primary language for application programming. In some aspects, it may be seen as a high-level language computer architecture.

These properties and features resulted in a hardware and microcode design that was more complex than most processors of the era, especially microprocessors. However, internal and external buses are (mostly) not wider than 16 bits, and, just as in other 32-bit microprocessors of the era (such as the 68000 or the 32016), 32-bit arithmetic instructions are implemented by a 16-bit ALU, via random logic and microcode or other kinds of sequential logic. The iAPX 432's enlarged address space over the 8080 was also limited by the fact that linear addressing of data could still use only 16-bit offsets, somewhat akin to Intel's first 8086-based designs, including the contemporary 80286 (the new 32-bit segment offsets of the 80386 architecture were described publicly in detail in 1984). [NB 2]
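As a rough illustration of the last point about the 16-bit ALU, a 32-bit addition can be carried out as two 16-bit additions chained by a carry. The following is a minimal sketch of that general multi-precision technique, not actual iAPX 432 microcode:

```python
# Sketch: a 32-bit addition performed on a 16-bit ALU as two passes linked
# by a carry. Illustrative only; this is not the real 432 microcode.
MASK16 = 0xFFFF

def add32_on_16bit_alu(a: int, b: int) -> int:
    lo = (a & MASK16) + (b & MASK16)              # first pass: low halves
    carry = lo >> 16                              # carry out of the low pass
    hi = (a >> 16) + (b >> 16) + carry            # second pass: high halves
    return ((hi & MASK16) << 16) | (lo & MASK16)  # result wraps at 32 bits

assert add32_on_16bit_alu(0x0001FFFF, 0x00000001) == 0x00020000
```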

Using the semiconductor technology of its day, Intel's engineers were not able to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to slow yet expensive computer systems, performing typical benchmarks at roughly one quarter the speed of the new 80286 chip at the same clock frequency (in early 1982). [7] This initial performance gap to the rather low-profile and low-priced 8086 line was probably the main reason why Intel's plan to replace the latter (later known as x86) with the iAPX 432 failed. Although engineers saw ways to improve a next-generation design, the iAPX 432 capability architecture had now started to be regarded more as an implementation overhead than as the simplifying support it was intended to be. [7]

Originally designed for clock frequencies of up to 10 MHz, actual devices sold were specified for maximum clock speeds of 4 MHz, 5 MHz, 7 MHz and 8 MHz with a peak performance of 2 million instructions per second at 8 MHz. [8] [9]

History

Development

Intel's 432 project started in 1976, a year after the 8-bit Intel 8080 was completed and a year before their 16-bit 8086 project began. The 432 project was initially named the 8800, [5] as their next step beyond the existing Intel 8008 and 8080 microprocessors. It turned out to be a very big step. The instruction sets of these 8-bit processors were not well suited to typical Algol-like compiled languages. However, the major problem was their small native addressing ranges: just 16 KB for the 8008 and 64 KB for the 8080, far too small for many complex software systems without some kind of bank switching, memory segmentation, or similar mechanism (which was built into the 8086 a few years later). Intel now aimed to build a sophisticated complete system in a few LSI chips that was functionally equal to or better than the best 32-bit minicomputers and mainframes, which required entire cabinets of older chips. This system would support multiprocessors, modular expansion, fault tolerance, advanced operating systems, advanced programming languages, very large applications, ultra reliability, and ultra security. Its architecture would address the needs of Intel's customers for a decade. [10]

The iAPX 432 development team was managed by Bill Lattin, with Justin Rattner as the lead engineer, [11] [12] [13] although one source [1] states that Fred Pollack was the lead engineer. (Rattner would later become CTO of Intel.) Initially the team worked from Santa Clara, but in March 1977 Lattin and his team of 17 engineers moved to Intel's new site in Portland. [12] Pollack later specialized in superscalar design and became the lead architect of the i686-generation Intel Pentium Pro. [1]

It soon became clear that it would take several years and many engineers to design all this. It would similarly take several years of further progress in Moore's Law before improved chip manufacturing could fit all of it into a few dense chips. Meanwhile, Intel urgently needed a simpler interim product to meet the immediate competition from Motorola, Zilog, and National Semiconductor. So Intel began a rushed project to design the 8086 as a low-risk incremental evolution from the 8080, using a separate design team. The mass-market 8086 shipped in 1978.

The 8086 was designed to be backward-compatible with the 8080 in the sense that 8080 assembly language could be mapped on to the 8086 architecture using a special assembler. Existing 8080 assembly source code (albeit no executable code) was thereby made upward compatible with the new 8086 to a degree. In contrast, the 432 had no software compatibility or migration requirements. The architects had total freedom to do a novel design from scratch, using whatever techniques they guessed would be best for large-scale systems and software. They applied fashionable computer science concepts from universities, particularly capability machines, object-oriented programming, high-level CISC machines, Ada, and densely encoded instructions. This ambitious mix of novel features made the chip larger and more complex. The chip's complexity limited the clock speed and lengthened the design schedule.

The core of the design — the main processor — was termed the General Data Processor (GDP) and built as two integrated circuits: one (the 43201) to fetch and decode instructions, the other (the 43202) to execute them. Most systems would also include the 43203 Interface Processor (IP) which operated as a channel controller for I/O, and an Attached Processor (AP), a conventional Intel 8086 which provided "processing power in the I/O subsystem". [4]

These were some of the largest[clarification needed] designs of the era. The two-chip GDP had a combined count of approximately 97,000 transistors,[citation needed] while the single-chip IP had approximately 49,000. By comparison, the Motorola 68000 (introduced in 1979) had approximately 40,000 transistors.[citation needed]

In 1983, Intel released two additional integrated circuits for the iAPX 432 Interconnect Architecture: the 43204 Bus Interface Unit (BIU) and 43205 Memory Control Unit (MCU). These chips allowed for nearly glueless multiprocessor systems with up to 63 nodes.

The project's failures

Some of the innovative features of the iAPX 432 were detrimental to good performance. In many cases, the iAPX 432 had significantly slower instruction throughput than conventional microprocessors of the era, such as the National Semiconductor 32016, Motorola 68010 and Intel 80286. One problem was that the two-chip implementation of the GDP limited it to the speed of the motherboard's electrical wiring. A larger issue was that the capability architecture needed large associative caches to run efficiently, but the chips had no room left for those. The instruction set also used bit-aligned variable-length instructions instead of the semi-fixed byte- or word-aligned formats used in the majority of computer designs. Instruction decoding was therefore more complex than in other designs. Although this did not hamper performance in itself, it used additional transistors (mainly for a large barrel shifter) in a design that was already short of space and transistors for caches, wider buses and other performance-oriented features. In addition, the BIU was designed to support fault-tolerant systems, and in doing so up to 40% of the bus time was held up in wait states.

Another major problem was its immature and untuned Ada compiler. It used high-cost object-oriented instructions in every case, even where faster scalar instructions would have made sense. For instance, the iAPX 432 included a very expensive inter-module procedure call instruction, which the compiler used for all calls, despite the existence of much faster branch-and-link instructions. Another very slow call was enter_environment, which set up the memory protection. The compiler ran this for every single variable in the system, even when variables were used inside an existing environment and did not have to be checked. To make matters worse, data passed to and from procedures was always passed by value-return rather than by reference. When running the Dhrystone benchmark, parameter passing took ten times longer than all other computations combined. [14]

According to the New York Times, "the i432 ran 5 to 10 times more slowly than its competitor, the Motorola 68000". [15]

Impact and similar designs

The iAPX 432 was one of the first systems to implement the new IEEE-754 Standard for Floating-Point Arithmetic. [16]

An outcome of the failure of the 432 was that microprocessor designers concluded that object support in the chip leads to a complex design that will invariably run slowly, and the 432 was often cited as a counter-example by proponents of RISC designs. However, some hold that the OO support was not the primary problem with the 432, and that the implementation shortcomings (especially in the compiler) mentioned above would have made any CPU design slow. Since the iAPX 432 there has been only one other attempt at a similar design, the Rekursiv processor, although the INMOS Transputer's process support was similar — and very fast.[ citation needed ]

Intel had spent considerable time, money, and mindshare on the 432, had a skilled team devoted to it, and was unwilling to abandon it entirely after its failure in the marketplace. A new architect—Glenford Myers—was brought in to produce an entirely new architecture and implementation for the core processor, which would be built in a joint Intel/Siemens project (later BiiN), resulting in the i960-series processors. The i960 RISC subset became popular for a time in the embedded processor market, but the high-end 960MC and the tagged-memory 960MX were marketed only for military applications.

According to the New York Times, Intel's collaboration with HP on the Merced processor (later known as Itanium) was the company's comeback attempt for the very high-end market. [15]

Architecture

The iAPX 432 instructions have variable length, between 6 and 321 bits. [17] Unusually, they are not byte-aligned, that is, they may contain odd numbers of bits and directly follow each other without regard to byte boundaries. [5]

Object-oriented memory and capabilities

The iAPX 432 has hardware and microcode support for object-oriented programming and capability-based addressing. [18] [19] The system uses segmented memory, with up to 2^24 segments of up to 64 KB each, providing a total virtual address space of 2^40 bytes. The physical address space is 2^24 bytes (16 MB).
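A quick back-of-the-envelope check of those figures (a plain arithmetic sketch; the constant names are invented for illustration):

```python
# Arithmetic check of the address-space figures above (illustrative only).
SEGMENT_SELECTOR_BITS = 24        # up to 2**24 segments
OFFSET_BITS = 16                  # up to 64 KB within each segment

virtual_space = 2**SEGMENT_SELECTOR_BITS * 2**OFFSET_BITS  # 2**40 bytes (1 TB)
physical_space = 2**24                                      # 16 MB

print(f"virtual:  2**40 = {virtual_space:,} bytes")
print(f"physical: 2**24 = {physical_space:,} bytes")
```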

Programs are not able to reference data or instructions by address; instead they must specify a segment and an offset within the segment. Segments are referenced by access descriptors (ADs), which provide an index into the system object table and a set of rights (capabilities) governing accesses to that segment. Segments may be "access segments", which can only contain Access Descriptors, or "data segments" which cannot contain ADs. The hardware and microcode rigidly enforce the distinction between data and access segments, and will not allow software to treat data as access descriptors, or vice versa.
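A minimal, hypothetical model of this scheme may make it concrete: every reference goes through an access descriptor that names an object-table entry and carries rights, and data and access segments are kept strictly apart. The field names and rights bits below are invented for illustration and do not reflect the real AD encoding:

```python
# Hypothetical sketch of capability-style addressing on the 432: software holds
# access descriptors (object-table index + rights), never raw addresses.
from dataclasses import dataclass, field

@dataclass
class Segment:
    kind: str                      # "data" or "access"
    contents: list = field(default_factory=list)

@dataclass
class AccessDescriptor:
    object_index: int              # index into the system object table
    can_read: bool = False
    can_write: bool = False

object_table = {
    1: Segment(kind="data", contents=[0] * 16),
    2: Segment(kind="access"),     # may only hold other access descriptors
}

def read_data(ad: AccessDescriptor, offset: int) -> int:
    seg = object_table[ad.object_index]
    if seg.kind != "data":
        raise PermissionError("an access segment cannot be read as data")
    if not ad.can_read:
        raise PermissionError("access descriptor lacks read rights")
    return seg.contents[offset]

ad = AccessDescriptor(object_index=1, can_read=True)
print(read_data(ad, 3))            # allowed: data segment, read right present
```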

System-defined objects consist of either a single access segment, or an access segment and a data segment. System-defined segments contain data or access descriptors for system-defined data at designated offsets, though the operating system or user software may extend these with additional data. Each system object has a type field which is checked by microcode, such that a Port Object cannot be used where a Carrier Object is needed. User programs can define new object types which will get the full benefit of the hardware type checking, through the use of type control objects (TCOs).

In Release 1 of the iAPX 432 architecture, a system-defined object typically consisted of an access segment, and optionally (depending on the object type) a data segment specified by an access descriptor at a fixed offset within the access segment.

By Release 3 of the architecture, in order to improve performance, access segments and data segments were combined into single segments of up to 128 KB, split into an access part and a data part of up to 64 KB each. This reduced the number of object table lookups dramatically, and doubled the maximum virtual address space. [20]

The iAPX 432 recognizes fourteen types of predefined system objects. [21] :pp.1-11–1-12

Garbage collection

Software running on the 432 does not need to explicitly deallocate objects that are no longer needed. Instead, the microcode implements part of the marking portion of Edsger Dijkstra's on-the-fly parallel garbage collection algorithm (a mark-and-sweep style collector). [22] The entries in the system object table contain the bits used to mark each object as being white, black, or grey as needed by the collector. The iMAX 432 operating system includes the software portion of the garbage collector. [23]
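A much-simplified software rendering of the tri-color marking idea may help; the real 432 split this work between microcode (marking) and iMAX, and this sketch ignores the concurrent-mutator aspects that make Dijkstra's on-the-fly algorithm subtle. The heap contents are invented:

```python
# Simplified tri-color mark phase: white = not yet seen, grey = queued to scan,
# black = scanned. Objects still white afterwards are garbage for the sweep.
from collections import deque

heap = {                       # object id -> ids of the objects it references
    "root": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["a"],                # cycle back to "a" is handled naturally
    "d": ["e"],                # "d" and "e" are unreachable
    "e": [],
}

color = {obj: "white" for obj in heap}
grey = deque(["root"])
color["root"] = "grey"

while grey:
    obj = grey.popleft()
    for child in heap[obj]:
        if color[child] == "white":
            color[child] = "grey"
            grey.append(child)
    color[obj] = "black"

print([obj for obj, c in color.items() if c == "white"])   # ['d', 'e']
```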

Instruction format

Executable instructions are contained within a system "instruction object". [21] :p.7-3 Due to instructions being bit-aligned, a 16-bit bit displacement into the instruction object allows the object to contain up to 65,536 bits (8,192 bytes) of instructions.

Instructions consist of an operator, consisting of a class and an opcode, and zero to three operand references. "The fields are organized to present information to the processor in the sequence required for decoding". More frequently used operators are encoded using fewer bits. [21] :p.7-6 The instruction begins with a 4- or 6-bit class field, which indicates the number of operands, called the order of the instruction, and the length of each operand. This is optionally followed by a 0- to 4-bit format field which describes the operands (if there are no operands, the format field is not present). Then come zero to three operands, as described by the format. The instruction is terminated by the 0- to 5-bit opcode, if any (some classes contain only one instruction and therefore have no opcode). "The Format field permits the GDP to appear to the programmer as a zero-, one-, two-, or three-address architecture." The format field indicates whether an operand is a data reference, or the top or next-to-top element of the operand stack. [21] :pp.7-3–7-5
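A toy decoder can show why this bit-aligned, variable-length layout forces the fetch unit to extract fields at arbitrary bit offsets (in hardware, via a barrel shifter). The field widths follow the description above; the bit stream, its field values and their interpretation are invented for illustration:

```python
# Toy bit-aligned decode: fields are pulled from the instruction stream at
# arbitrary bit positions. Only the bit-aligned aspect reflects the 432;
# the example stream and decoded values are hypothetical.
def take_bits(stream: int, bit_pos: int, width: int) -> tuple[int, int]:
    """Extract `width` bits starting at `bit_pos` (LSB-first for simplicity)."""
    value = (stream >> bit_pos) & ((1 << width) - 1)
    return value, bit_pos + width

stream = 0b101_1010_0011_011010                # invented instruction bit stream
pos = 0
class_field, pos = take_bits(stream, pos, 6)   # class: operand count and sizes
format_field, pos = take_bits(stream, pos, 4)  # format: how operands are addressed
opcode, pos = take_bits(stream, pos, 5)        # opcode within the class

print(class_field, format_field, opcode, "next instruction begins at bit", pos)
```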


Notes

  1. Sometimes Intel Advanced Processor Architecture [2]
  2. Although the 80386 chip was not mass-produced until mid-1986


References

  1. Dvorak, John C. "Whatever Happened to the Intel iAPX432?". Retrieved 19 July 2012.
  2. Defining Intel: 25 Years / 25 Events (PDF). Intel. 1993. p. 14.
  3. Smith, Eric. "Intel iAPX-432 Micromainframe". Retrieved Dec 6, 2015.
  4. Intel Corporation (1981). Introduction to the iAPX 432 Architecture (PDF). p. iii.
  5. Mazor, Stanley (January–March 2010). "Intel's 8086". IEEE Annals of the History of Computing. 32 (1): 75–79. doi:10.1109/MAHC.2010.22. S2CID 16451604.
  6. Kahn, Kevin C.; Corwin, William M.; Dennis, T. Don; d'Hooge, Herman; Hubka, David E.; Hutchins, Linda A.; Montague, John T.; Pollack, Fred J. (December 1981). "iMAX: A multiprocessor operating system for an object-based computer" (PDF). ACM SIGOPS Operating Systems Review. 15 (5): 127–136. doi:10.1145/800216.806601. S2CID 9245960.
  7. Colwell, Robert; Gehringer, Edward (1988). "Performance Effects of Architectural Complexity in the Intel 432" (PDF). Transactions on Computer Systems. 6 (3): 296–339. doi:10.1145/45059.214411. S2CID 8314206.
  8. Intel iAPX-432 Micromainframe.
  9. Maliniak, Lisa (October 21, 2002). "Ten Notable Flops: Learning From Mistakes". Electronic Design.
  10. King, David; Zhou, Liang; Bryson, Jon; Dickson, David (April 15, 1999). "Intel iAPX 432 - Computer Science 460 - Final Project".
  11. Mazor, Stanley (2010). "Intel's 8086". IEEE Annals of the History of Computing. 32: 75. doi:10.1109/MAHC.2010.22. S2CID 16451604.
  12. Mayer, Heike (2012). Entrepreneurship and Innovation in Second Tier Regions. Edward Elgar Publishing. pp. 100–101. ISBN 978-0-85793-869-5.
  13. Defining Intel: 25 Years / 25 Events (PDF). Intel. 1993. p. 14.
  14. Smotherman, Mark. Overview of Intel 432.
  15. Markoff, John. "Inside Intel, The Future Is Riding on A New Chip". The New York Times, April 5, 1998.
  16. Vickery, Christopher. "IEEE-754 Reference Material". Retrieved Dec 5, 2015.
  17. Ichikawa, Tadao; Tsubotani, H. (1992). Language Architectures and Programming Environments. World Scientific. p. 127. ISBN 978-981-02-1012-0.
  18. Levy, Henry M. (1984). "Chapter 9: The Intel iAPX 432" (PDF). Capability-Based Computer Systems. Digital Press.
  19. Organick, Elliott I. (1983). "Chapter 4: i432 Object Structures for Program Execution". A Programmer's View of the Intel 432 System. New York: McGraw-Hill. ISBN 0-07-047719-1. OCLC 9110667.
  20. Myers, Glenford J. (1982). "Section VI: Overview of the Intel iAPX432 Architecture". Advances in Computer Architecture (2nd ed.). Wiley. ISBN 978-0-471-07878-4.
  21. Intel Corporation (1983). iAPX 432 General Data Processor Architecture Reference Manual (PDF). Retrieved Nov 16, 2015.
  22. Dijkstra, E. W.; Lamport, L.; Martin, A. J.; Scholten, C. S.; Steffens, E. F. M. (November 1978). "On-the-fly garbage collection: an exercise in cooperation". Communications of the ACM. 21 (11): 966–975. doi:10.1145/359642.359655. S2CID 8017272.
  23. "iMAX 432 Reference Manual" (PDF). Intel. May 1982.