Iron law of processor performance

In computer architecture, the iron law of processor performance (or simply iron law of performance) describes the performance trade-off between complexity and the number of primitive instructions that processors use to perform calculations.[1] This formulation of the trade-off spurred the development[citation needed] of Reduced Instruction Set Computers (RISC) whose instruction set architectures (ISAs) leverage a smaller set of core instructions to improve performance. The term was coined by Douglas Clark[2] based on research performed by Clark and Joel Emer in the 1980s.[3]

Explanation

The performance of a processor is the time it takes to execute a program (time per program). This can be further broken down into three factors:[4]

time/program = (instructions/program) × (clock cycles/instruction) × (time/clock cycle)

Selection of an instruction set architecture affects instructions per program and cycles per instruction, whereas time per cycle is largely determined by the manufacturing technology. Classic Complex Instruction Set Computer (CISC) ISAs optimized instructions per program by providing a larger set of more complex CPU instructions. Generally speaking, however, complex instructions inflate the number of clock cycles per instruction because they must be decoded into the simpler micro-operations actually performed by the hardware. After converting x86 binaries to the micro-operations used internally, the total number of operations is close to that produced for a comparable RISC ISA.[5] The iron law of processor performance makes this trade-off explicit and pushes for optimization of time per program as a whole, not just a single component.
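
To make the trade-off concrete, the following sketch evaluates the iron law for two hypothetical designs running the same workload at the same clock rate; the instruction counts and CPI values are assumptions chosen purely for illustration, not measurements from the cited sources.

```python
# Minimal sketch of the iron law with hypothetical numbers; the instruction
# counts, CPI values, and clock rate below are illustrative assumptions only.

def execution_time(instructions, cycles_per_instruction, seconds_per_cycle):
    """time/program = (instructions/program) * (cycles/instruction) * (time/cycle)"""
    return instructions * cycles_per_instruction * seconds_per_cycle

seconds_per_cycle = 1 / 2e9  # assumed 2 GHz clock, i.e. 0.5 ns per cycle for both designs

# Hypothetical CISC-style encoding: fewer instructions, but a higher average CPI.
cisc = execution_time(instructions=800_000, cycles_per_instruction=2.5,
                      seconds_per_cycle=seconds_per_cycle)

# Hypothetical RISC-style encoding: more instructions, but a lower average CPI.
risc = execution_time(instructions=1_200_000, cycles_per_instruction=1.2,
                      seconds_per_cycle=seconds_per_cycle)

print(f"CISC-style: {cisc * 1e3:.2f} ms")  # 800,000 * 2.5 * 0.5 ns = 1.00 ms
print(f"RISC-style: {risc * 1e3:.2f} ms")  # 1,200,000 * 1.2 * 0.5 ns = 0.72 ms
```

Under the iron law, the design with the lower product of the three factors wins, regardless of which individual factor is smallest.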

While the iron law is credited with sparking the development of RISC architectures,[citation needed] it does not imply that a simpler ISA is always faster. If that were the case, the fastest ISA would consist of simple binary logic. A single CISC instruction can be faster than the equivalent set of RISC instructions when it enables multiple micro-operations to be performed in a single clock cycle. In practice, however, the regularity of RISC instructions allowed pipelined implementations in which the total execution time of an instruction was typically about five clock cycles, but each instruction could follow the previous one about one clock cycle later.[citation needed] CISC processors can also achieve higher performance using techniques such as modular extensions, predictive logic, compressed instructions, and macro-operation fusion.[6][5][7]
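
The throughput argument can be illustrated with a small, idealized calculation. The sketch below compares the cycle count for a stream of instructions executed one at a time against an assumed five-stage pipeline with no stalls; both the pipeline depth and the instruction count are hypothetical.

```python
# Idealized five-stage pipeline sketch; assumes no stalls, hazards, or branch
# penalties, so it overstates the real-world benefit. All numbers are hypothetical.

def unpipelined_cycles(num_instructions, cycles_per_instruction=5):
    # Each instruction runs to completion before the next one begins.
    return num_instructions * cycles_per_instruction

def pipelined_cycles(num_instructions, pipeline_depth=5):
    # The first instruction needs the full pipeline depth to complete; after
    # that, one instruction completes every cycle.
    return pipeline_depth + (num_instructions - 1)

n = 1_000_000
print(unpipelined_cycles(n))  # 5,000,000 cycles (CPI = 5)
print(pipelined_cycles(n))    # 1,000,004 cycles (effective CPI approaches 1)
```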

See also

Related Research Articles

A complex instruction set computer is a computer architecture in which single instructions can execute several low-level operations or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC) and has therefore become something of an umbrella term for everything that is not RISC, where the typical differentiating characteristic is that most RISC designs use uniform instruction length for almost all instructions, and employ strictly separate load and store instructions.

<span class="mw-page-title-main">Reduced instruction set computer</span> Processor executing one instruction in minimal clock cycles

In computer science, a reduced instruction set computer (RISC) is a computer architecture designed to simplify the individual instructions given to the computer to accomplish tasks. Compared to the instructions given to a complex instruction set computer (CISC), a RISC computer might require more instructions in order to accomplish a task because the individual instructions are written in simpler code. The goal is to offset the need to process more instructions by increasing the speed of each instruction, in particular by implementing an instruction pipeline, which may be simpler to achieve given simpler instructions.

In computer science, an instruction set architecture (ISA), also called computer architecture, is an abstract model of a computer. A device that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation.

IA-64 is the instruction set architecture (ISA) of the Itanium family of 64-bit Intel microprocessors. The basic ISA specification originated at Hewlett-Packard (HP), and was subsequently implemented by Intel in collaboration with HP. The first Itanium processor, codenamed Merced, was released in 2001.

In the history of computer hardware, some early reduced instruction set computer central processing units used a very similar architectural solution, now called a classic RISC pipeline. Those CPUs were: MIPS, SPARC, Motorola 88000, and later the notional CPU DLX invented for education.

<span class="mw-page-title-main">Microarchitecture</span> Component of computer engineering

In computer science and computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology.

In computer architecture, cycles per instruction is one aspect of a processor's performance: the average number of clock cycles per instruction for a program or program fragment. It is the multiplicative inverse of instructions per cycle.

In computer architecture, frequency scaling is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004.

<span class="mw-page-title-main">History of general-purpose CPUs</span> History of processors used in general purpose computers

The history of general-purpose CPUs is a continuation of the earlier history of computing hardware.

An Advanced Encryption Standard instruction set is now integrated into many processors. The purpose of the instruction set is to improve the speed and security of applications performing encryption and decryption using the Advanced Encryption Standard (AES).

<span class="mw-page-title-main">Computer architecture</span> Set of rules describing computer system

In computer science, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation.

<span class="mw-page-title-main">Streeter–Phelps equation</span>

The Streeter–Phelps equation is used in the study of water pollution as a water quality modelling tool. The model describes how dissolved oxygen (DO) decreases in a river or stream along a certain distance by degradation of biochemical oxygen demand (BOD). The equation was derived by H. W. Streeter, a sanitary engineer, and Earle B. Phelps, a consultant for the U.S. Public Health Service, in 1925, based on field data from the Ohio River. The equation is also known as the DO sag equation.

IBM POWER is a reduced instruction set computer (RISC) instruction set architecture (ISA) developed by IBM. The name is an acronym for Performance Optimization With Enhanced RISC.

IBM Power microprocessors are designed and sold by IBM for servers and supercomputers. The name "POWER" was originally presented as an acronym for "Performance Optimization With Enhanced RISC". The Power line of microprocessors has been used in IBM's RS/6000, AS/400, pSeries, iSeries, System p, System i, and Power Systems lines of servers and supercomputers. They have also been used in data storage devices and workstations by IBM and by other server manufacturers like Bull and Hitachi.

RISC-V is an open standard instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles. Unlike most other ISA designs, RISC-V is provided under royalty-free open-source licenses. A number of companies are offering or have announced RISC-V hardware; open source operating systems with RISC-V support are available, and the instruction set is supported in several popular software toolchains.

References

1. Eeckhout, Lieven (2010). Computer Architecture Performance Evaluation Methods. Morgan & Claypool. pp. 5–6. ISBN 9781608454679. Retrieved 9 March 2021.
2. Emer, Joel (2021-04-13). YArch 2021 Keynote. Retrieved 2021-09-02.
3. Emer, Joel S.; Clark, Douglas W. (1984). "A Characterization of Processor Performance in the VAX-11/780". IEEE.
4. Asanovic, Krste (2019). "Lecture 4 - Pipelining" (PDF). Department of Electrical Engineering and Computer Sciences at UC Berkeley (Lecture Slides). p. 2. Archived from the original on 2020-03-11. Retrieved 2020-03-11.
5. Celio, Christopher; Dabbelt, Palmer; Patterson, David A.; Asanović, Krste (2016-07-08). "The Renewed Case for the Reduced Instruction Set Computer: Avoiding ISA Bloat with Macro-Op Fusion for RISC-V". arXiv:1607.02318 [cs.AR].
6. Engheim, Erik (2020-12-28). "The Genius of RISC-V Microprocessors". Medium. Retrieved 2021-03-11.
7. Celio, Christopher (2016-07-26). A Comparison of RISC V, ARM, and x86. Retrieved 2021-03-11.