Berkeley RISC


Berkeley RISC is one of two seminal research projects into reduced instruction set computer (RISC) based microprocessor design carried out under the Defense Advanced Research Projects Agency (DARPA) VLSI Project. RISC was led by David Patterson (who coined the term RISC) at the University of California, Berkeley between 1980 and 1984. [1] The other project took place a short distance away at Stanford University under their MIPS effort, starting in 1981 and running until 1984.


Berkeley's project was so successful that its name came to stand for all similar designs to follow; even the Stanford MIPS would become known as a "RISC processor". The Berkeley RISC design was later commercialized by Sun Microsystems as the SPARC architecture, and inspired the ARM architecture. [2]

The RISC concept

Both RISC and MIPS were developed from the realization that the vast majority of programs used only a small minority of a processor's available instruction set. In a famous 1978 paper, Andrew S. Tanenbaum demonstrated that a complex 10,000-line high-level program could be represented by a simplified instruction set architecture with an 8-bit fixed-length opcode. [3] This was roughly the same conclusion reached at IBM, whose studies of its own code showed that programs running on mainframes like the IBM 360 used only a small subset of all the instructions available. Both of these studies suggested that one could produce a much simpler CPU that would still run most real-world code. Another finding, not fully explored at the time, was Tanenbaum's note that 81% of the constants were either 0, 1, or 2. [3]

These realizations were taking place as the microprocessor market was moving from 8-bit to 16-bit designs, with 32-bit designs about to appear. Those designs were premised on the goal of replicating some of the more well-respected existing ISAs from the mainframe and minicomputer world. For instance, the National Semiconductor NS32000 started out as an effort to produce a single-chip implementation of the VAX-11, which had a rich instruction set with a wide variety of addressing modes. The Motorola 68000 was similar in general layout. To provide this rich set of instructions, CPUs used microcode to decode the user-visible instruction into a series of internal operations. This microcode represented perhaps 1/4 to 1/3 of the transistors of the overall design.

If, as these studies suggested, the majority of these opcodes would never be used in practice, then this significant resource was being wasted. Simply building the same processor with the unused instructions removed would make it smaller and thus less expensive, while spending those transistors on improving performance rather than on decoding instructions that would never be used would make it faster. The RISC concept was to take advantage of both, producing a CPU at the same level of complexity as the 68000 but much faster.

To do this, RISC concentrated on adding many more registers, small bits of memory holding temporary values that can be accessed very rapidly. This contrasts with normal main memory, which might take several cycles to access. By providing more registers, and making sure the compilers actually used them, programs should run much faster. Additionally, the speed of the processor would be more closely defined by its clock speed, because less of its time would be spent waiting for memory accesses. Transistor for transistor, a RISC design would outperform a conventional CPU.

On the downside, the instructions being removed were generally performing several "sub-instructions". For instance, the ADD instruction of a traditional design would generally come in several flavours: one that added the numbers in two registers and placed the result in a third, another that added numbers found in main memory and put the result in a register, and so on. The RISC designs, on the other hand, included only a single flavour of any particular instruction; ADD, for instance, would always use registers for all operands. This forced the programmer to write additional instructions to load values from memory when needed, making a RISC program "less dense".

In the era of expensive memory this was a real concern, notably because memory was also much slower than the CPU. Since a RISC design's ADD would actually require four instructions (two loads, an add, and a store), the machine would have to perform many more memory accesses to read the extra instructions, potentially slowing it down considerably. This was offset to some degree by the fact that the new designs used what was then a very large instruction word of 32 bits, allowing small constants to be folded directly into the instruction instead of having to be loaded separately. Additionally, the result of one operation is often used soon after by another, so by skipping the write to memory and keeping the result in a register, the program did not end up much larger and could in theory run much faster. For instance, a string of instructions carrying out a series of mathematical operations might require only a few loads from memory, while the majority of the numbers being used would be either constants in the instructions or intermediate values left in the registers from prior calculations. In a sense, this technique uses some registers to shadow memory locations: the registers act as proxies for the memory locations until their final values, after a group of instructions, have been determined.
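To make the trade-off concrete, the sketch below models a toy load/store machine in Python. The instruction names and memory model are invented for illustration and are not taken from RISC I: the single statement c = a + b becomes four instructions, but only the loads and the store actually touch data memory, and values already held in registers cost nothing to reuse.

```python
# Toy load/store machine (hypothetical, for illustration only), contrasting
# the RISC four-instruction sequence with a single memory-to-memory CISC ADD.

memory = {"a": 5, "b": 7, "c": 0}   # named memory locations
regs = {}                            # register file
mem_accesses = 0                     # data-memory traffic counter

def load(reg, addr):                 # memory -> register
    global mem_accesses
    regs[reg] = memory[addr]
    mem_accesses += 1

def store(addr, reg):                # register -> memory
    global mem_accesses
    memory[addr] = regs[reg]
    mem_accesses += 1

def add(dst, src1, src2):            # registers only: no memory traffic
    regs[dst] = regs[src1] + regs[src2]

# c = a + b: four instructions, three data-memory accesses
load("r1", "a")
load("r2", "b")
add("r3", "r1", "r2")
store("c", "r3")
print(memory["c"], mem_accesses)     # prints: 12 3

# Had a and b already been register-resident from earlier computation, only
# the final store would touch memory: the registers shadow the memory
# locations until the result is final.
```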

To the casual observer, it was not clear that the RISC concept would improve performance; it might even make it worse. The only way to be sure was to simulate it, and the results were clear: in test after test, simulation showed an enormous overall performance benefit from this design.

Where the two projects, RISC and MIPS, differed was in the handling of the registers. MIPS simply added lots of registers and left it to the compilers (or assembly language programmers) to make use of them. RISC, on the other hand, added circuitry to the CPU to assist the compiler. RISC used the concept of register windows, in which the entire "register file" was broken down into blocks, allowing the compiler to "see" one block for global variables, and another for local variables.

The idea was to make one particularly common instruction, the procedure call, extremely easy to implement. Almost all programming languages use a system known as an activation record or stack frame for each procedure, which contains the address from which the procedure was called, the data (parameters) that were passed in, and space for any result values that need to be returned. In the vast majority of cases these frames are small, typically with three or fewer inputs and one or no outputs (and sometimes an input is reused as an output). In the Berkeley design, then, a register window was a set of several registers, enough that the entire procedure stack frame would most likely fit entirely within the register window.

In this case, the call into and return from a procedure are simple and extremely fast. A single instruction sets up a new block of registers, a new register window, and then, with operands passed into the procedure at the "low end" of the new window, the program jumps into the procedure. On return, the results are placed in the window at the same end, and the procedure exits. The register windows are set up to overlap at the ends, so that the results from the call simply "appear" in the window of the caller, with no data having to be copied. Thus the common procedure call does not have to interact with main memory, greatly accelerating it.
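A minimal sketch of the windowing idea, with made-up window sizes rather than the actual RISC I layout: a current window pointer selects which slice of the register file a procedure sees, and because adjacent slices overlap, a value written at the top of the caller's window is read at the bottom of the callee's.

```python
# Hypothetical register-window sketch (sizes invented for illustration).
WINDOW = 10      # registers visible per window (assumed)
OVERLAP = 4      # registers shared between caller and callee (assumed)
DEPTH = 8        # number of windows in hardware

reg_file = [0] * (DEPTH * (WINDOW - OVERLAP) + OVERLAP)
cwp = 0          # current window pointer

def reg(i):
    """File index of visible register i within the current window."""
    return cwp * (WINDOW - OVERLAP) + i

def call():
    global cwp
    if cwp == DEPTH - 1:
        # Out of windows: a real design would trap and spill to memory.
        raise OverflowError("window overflow")
    cwp += 1     # a single pointer bump; nothing is copied or saved

def ret():
    global cwp
    cwp -= 1

# Caller places an argument at the high ("out") end of its window...
reg_file[reg(WINDOW - OVERLAP)] = 42
call()
# ...and the callee finds it at the low ("in") end of the new window,
# because the two windows physically overlap in the register file.
print(reg_file[reg(0)])   # prints: 42
ret()
```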

On the downside, this approach means that procedures with large numbers of local variables are problematic, while procedures with few variables waste registers, an expensive resource. There are a finite number of register windows in the design, e.g., eight, so procedures can only be nested that many levels deep before the register windowing mechanism reaches its limit; once the last window is in use, no new window can be set up for another nested call. And if procedures are only nested a few levels deep, the registers in the windows above the deepest call nesting level can never be accessed at all, and are completely wasted. It was Stanford's work on compilers that led them to ignore the register window concept, believing that an efficient compiler could make better use of the registers than a fixed system in hardware. (The same reasoning would apply for a smart assembly language programmer.)

RISC I

RISC I die shot. Most of the chip is occupied by the register file (bottom left area); control logic occupies only the small top right corner.

The first attempt to implement the RISC concept was originally named Gold. Work on the design started in 1980 as part of a VLSI design course, but the design, complicated for its time, crashed almost all existing design tools. The team had to spend considerable time improving or re-writing the tools, and even with these new tools it took just under an hour to extract the design on a VAX-11/780.

The final design, named RISC I, was published at the Association for Computing Machinery (ACM) International Symposium on Computer Architecture (ISCA) in 1981. It had 44,500 transistors implementing 31 instructions and a register file containing 78 32-bit registers. This allowed for six register windows of 14 registers each, 4 of which were overlapped from the prior window, giving 6 windows × 10 unique registers plus 18 globals: 78 registers in total. The control and instruction decode section occupied only 6% of the die, whereas the typical design of the era used about 50% for the same role; the register file took up most of the remaining space. [4]

RISC I also featured a two-stage instruction pipeline for additional speed, but without the complex instruction re-ordering of more modern designs. This makes conditional branches a problem, because the compiler has to fill the instruction slot following a conditional branch (the so-called branch delay slot) with something selected to be "safe", i.e., not dependent on the outcome of the conditional. Sometimes the only suitable instruction in this case is a NOP. A notable number of later RISC-style designs still require consideration of the branch delay.
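The toy simulator below (with an invented instruction encoding, not RISC I's) shows the effect: the instruction already fetched behind a taken branch completes anyway, so the program behaves correctly only if that slot holds something safe.

```python
# Toy two-stage pipeline sketch (hypothetical encoding, for illustration):
# the instruction after a taken branch -- the delay slot -- executes anyway.

program = [
    ("set",  "r1", 0),     # 0: r1 = 0
    ("beq",  "r1", 4),     # 1: branch to index 4 if r1 == 0 (taken here)
    ("addi", "r2", 1),     # 2: DELAY SLOT: runs even though branch is taken
    ("addi", "r2", 100),   # 3: skipped by the branch
    ("halt",),             # 4
]

regs = {"r1": 0, "r2": 0}
pc = 0
while True:
    op = program[pc]
    if op[0] == "halt":
        break
    elif op[0] == "set":
        regs[op[1]] = op[2]; pc += 1
    elif op[0] == "addi":
        regs[op[1]] += op[2]; pc += 1
    elif op[0] == "beq":
        # The instruction behind the branch (pc + 1) was already fetched and
        # completes before the branch takes effect; the compiler must fill
        # this slot with something "safe", falling back to a NOP if need be.
        delay = program[pc + 1]
        if delay[0] == "addi":
            regs[delay[1]] += delay[2]
        pc = op[2] if regs[op[1]] == 0 else pc + 2

print(regs["r2"])   # prints: 1 -- the delay-slot addi ran, the skipped one did not
```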

After a month of validation and debugging, the design was sent to the innovative MOSIS service for production on June 22, 1981, using a 2 μm (2,000 nm) process. A variety of delays forced them to abandon their masks four separate times, and wafers with working examples did not arrive back at Berkeley until May 1982. The first working RISC I "computer" (actually a checkout board) ran on June 11. In testing, the chips proved to have lower performance than expected. In general, an instruction took 2 μs to complete, while the original design allowed for about 0.4 μs (five times as fast). The precise reasons for this problem were never fully explained. However, throughout testing it was clear that certain instructions did run at the expected speed, suggesting the problem was physical, not logical.

Had the design worked at full speed, performance would have been excellent. Simulations using a variety of small programs, comparing the 4 MHz RISC I to the 5 MHz 32-bit VAX 11/780 and the 5 MHz 16-bit Zilog Z8000, showed this clearly. Program size was about 30% larger than on the VAX but very close to that of the Z8000, validating the argument that the higher code density of CISC designs was not actually all that impressive in reality. In terms of overall performance, the RISC I was twice as fast as the VAX, and about four times as fast as the Z8000. The programs ended up performing about the same overall number of memory accesses, because the large register file dramatically improved the odds that the needed operand was already on-chip.

It is important to put this performance in context. Even though the RISC design had run slower than the VAX, it made no difference to the importance of the design. RISC allowed for the production of a true 32-bit processor on a real chip die using what was already an older fab. Traditional designs simply could not do this; with so much of the chip surface dedicated to decoder logic, a true 32-bit design like the Motorola 68020 required newer fabs before becoming practical. Using the same fabs, RISC I would have greatly outperformed the competition.

On February 12, 2015, IEEE installed a plaque at UC Berkeley to commemorate the contribution of RISC-I. [5]

RISC II

RISC II die shot.

While the RISC I design ran into delays, work at Berkeley had already turned to the new Blue design. Work on Blue progressed slower than Gold, due both to the lack of a pressing need now that Gold was going to fab, and to changeovers in the classes and students staffing the effort. This pace also allowed them to add in several new features that would end up improving the design considerably.

The key difference was simpler cache circuitry that eliminated one line per bit (from three to two), dramatically shrinking the size of the register file. The change also required much tighter bus timing, but this was a small price to pay, and to meet the new timing several other parts of the design were sped up as well.

The savings due to the new design were tremendous. Whereas Gold contained a total of 78 registers in 6 windows, Blue contained 138 registers broken into 8 windows of 16 registers each, with another 10 globals. This expansion of the register file increased the chance that a given procedure could fit all of its local storage in registers, and increased the possible nesting depth. Nevertheless, the larger register file required fewer transistors, and the final Blue design, fabbed as RISC II, implemented the entire RISC instruction set with only 40,760 transistors. [6]
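The quoted register-file totals for both designs can be checked directly; a quick sketch, treating the per-window figures as the registers unique to each window, as described above:

```python
# Register-file arithmetic for the two designs, as described in the text.
# Gold: 6 windows of 14 registers, 4 overlapped (10 unique), plus 18 globals.
# Blue: 8 windows of 16 unique registers, plus 10 globals.
gold_windows, gold_unique, gold_globals = 6, 14 - 4, 18
blue_windows, blue_unique, blue_globals = 8, 16, 10

print(gold_windows * gold_unique + gold_globals)   # 78  (RISC I / Gold)
print(blue_windows * blue_unique + blue_globals)   # 138 (RISC II / Blue)
```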

The other major change was an instruction-format expander, which invisibly "up-converted" 16-bit instructions into the 32-bit format. This allowed smaller instructions, typically those with one or no operands, such as NOP, to be stored in memory in a smaller 16-bit format, and for two such instructions to be packed into a single machine word. The instructions were invisibly expanded back to their 32-bit versions before they reached the arithmetic logic unit (ALU), meaning that no changes were needed in the core logic. This simple technique yielded a surprising 30% improvement in code density, making an otherwise identical program run faster on Blue than on Gold due to the decreased number of memory accesses.
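The actual RISC II bit layout is not described here, but the mechanism can be sketched with a made-up encoding: two 16-bit halfwords share one 32-bit memory word and are widened back to a full-size format before execution, so the ALU only ever sees 32-bit instructions.

```python
# Instruction-format expander sketch with an invented encoding (not the real
# RISC II layout): short form is 4-bit opcode, 4-bit register, 8-bit immediate;
# the "expanded" 32-bit form re-spreads those fields into wider positions.

def pack(short_hi, short_lo):
    """Pack two 16-bit instruction halfwords into one 32-bit memory word."""
    return ((short_hi & 0xFFFF) << 16) | (short_lo & 0xFFFF)

def expand(short_insn):
    """Up-convert a 16-bit instruction to the assumed 32-bit format:
    8-bit opcode, 8-bit register, 16-bit immediate."""
    opcode = (short_insn >> 12) & 0xF
    reg    = (short_insn >> 8) & 0xF
    imm    = short_insn & 0xFF
    return (opcode << 24) | (reg << 16) | imm

word = pack(0x1203, 0x0000)   # e.g. one short op packed with a short NOP
hi, lo = (word >> 16) & 0xFFFF, word & 0xFFFF
# Expansion happens between fetch and execute; core logic is unchanged.
print(hex(expand(hi)), hex(expand(lo)))   # prints: 0x1020003 0x0
```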

RISC II proved to be much more successful in silicon, and in testing it outperformed almost all minicomputers on almost all tasks. For instance, performance ranged from 85% of VAX speed to 256% on a variety of workloads. RISC II was also benchmarked against the famous Motorola 68000, then considered to be the best commercial chip implementation, and outperformed it by 140% to 420%.

Follow-ons

Work on the original RISC designs ended with RISC II, but the concept lived on at Berkeley. The basic core was re-used in SOAR in 1984, essentially a RISC converted to run Smalltalk (in the same way that it could be claimed RISC ran C), and later in the similar VLSI-BAM, which ran Prolog instead of Smalltalk. Another effort was SPUR, a full set of chips needed to build a complete 32-bit workstation.

RISC is less famous, but more influential, as the basis of the commercial SPARC processor design from Sun Microsystems. It was the SPARC that first clearly demonstrated the power of the RISC concept; when it shipped in the first Sun-4 workstations, it outperformed anything on the market. This led to virtually every Unix vendor hurrying to develop a RISC design of their own, leading to designs like the DEC Alpha and PA-RISC, while Silicon Graphics (SGI) purchased MIPS Computer Systems. By 1986, most large chip vendors had followed, working on efforts like the Motorola 88000, Fairchild Clipper, AMD 29000 and the PowerPC. On February 13, 2015, IEEE installed a plaque commemorating the SPARC RISC architecture at Oracle Corporation in Santa Clara. [7]

Techniques developed for and alongside the idea of the reduced instruction set have also been adopted in successively more powerful implementations and extensions of the traditional "complex" x86 architecture. Much of a modern microprocessor's transistor count is devoted to large caches, many pipeline stages, superscalar instruction dispatch, branch prediction and other modern techniques which are applicable regardless of instruction architecture. The amount of silicon dedicated to instruction decoding on a modern x86 implementation is proportionately quite small, so the distinction between "complex" and RISC processor implementations has become blurred.


References

Citations

  1. Reilly, Edwin D. (2003). Milestones in Computer Science and Information Technology. p. 50. ISBN 1573565210.
  2. Chisnall, David (2010-08-23). "Understanding ARM Architectures". InformIT. Retrieved 13 October 2015.
  3. Tanenbaum, Andrew (March 1978). "Implications of Structured Programming for Machine Architecture". Communications of the ACM. 21 (3): 237–246. doi:10.1145/359361.359454. S2CID 3261560.
  4. Peek, James B. (1983-06-02). The VLSI Circuitry of RISC I (PDF) (Technical report). Berkeley, CA: University of California at Berkeley. pp. 13, 59. CSD-83-135.
  5. "memorabilia [RISC-I Reunion]". risc.berkeley.edu. Retrieved 2020-03-19.
  6. "Berkeley Hardware Prototypes". people.eecs.berkeley.edu. Retrieved 2021-11-06.
  7. Gee, Kelvin. "Oracle to Receive IEEE Milestone Award for SPARC RISC Architecture". blogs.oracle.com. Retrieved 2020-03-19.
