Koomey's law

Figure: Computations per kWh, from 1946 to 2009 (graph by Koomey)

Koomey's law describes a trend in the history of computing hardware: for about a half-century, the number of computations per joule of energy dissipated doubled about every 1.57 years. Professor Jonathan Koomey described the trend in a 2010 paper in which he wrote that "at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half." [1]

This trend had been remarkably stable since the 1950s (R² over 98%). But in 2011, Koomey re-examined the data [2] and found that after 2000 the doubling slowed to about once every 2.6 years. The slowdown is related to the slowing [3] of Moore's law, the ability to build smaller transistors, and to the end around 2005 of Dennard scaling, the ability to build smaller transistors with constant power density.

"The difference between these two growth rates is substantial. A doubling every year and a half results in a 100-fold increase in efficiency every decade. A doubling every two and a half years yields just a 16-fold increase", Koomey wrote. [4]

Implications

An implication of Koomey's law is that the amount of battery needed for a fixed computing load will fall by a factor of 100 every decade. [5] As computing devices become smaller and more mobile, this trend may be even more important than improvements in raw processing power for many applications. Furthermore, energy costs are becoming a larger factor in the economics of data centers, adding to the importance of Koomey's law.

The slowing of Koomey's law has implications for energy use in information and communications technology. However, because computers do not run at peak output continuously, the effect of this slowing may not be seen for a decade or more. [6] Koomey writes that "as with any exponential trend, this one will eventually end...in a decade or so, energy use will once again be dominated by the power consumed when a computer is active. And that active power will still be hostage to the physics behind the slowdown in Moore's Law."

History

Koomey was the lead author of the article in IEEE Annals of the History of Computing that first documented the trend. [1] At about the same time, Koomey published a short piece about it in IEEE Spectrum. [7]

It was further discussed in MIT Technology Review, [8] in a post by Erik Brynjolfsson on the "Economics of Information" blog, [5] and in The Economist online. [9]

The trend was previously known for digital signal processors, where it was named "Gene's law" after Gene Frantz, an electrical engineer at Texas Instruments. Frantz had documented that power dissipation in DSPs had halved every 18 months over a 25-year period. [10] [11]

Slowing and end of Koomey's law

More recent studies indicate that Koomey's law has slowed to a doubling every 2.6 years. [2] This rate is a statistical average over many technologies and many years, but there are exceptions. For example, in 2020 AMD reported that, since 2014, it had improved the efficiency of its mobile processors by a factor of 31.7, a doubling time of about 1.2 years. [12] In June 2020, Koomey responded to the report, writing, "I have reviewed the data and can report that AMD exceeded the 25×20 goal it set in 2014 through improved design, superior optimization, and a laser-like focus on energy efficiency." [12]
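
The doubling time reported here can be recovered from the overall improvement factor and the elapsed time; a minimal sketch, assuming the roughly six-year span from 2014 to 2020 described in the report:

```python
import math

def doubling_time(improvement_factor: float, elapsed_years: float) -> float:
    """Years per doubling implied by an overall improvement factor over elapsed_years."""
    return elapsed_years / math.log2(improvement_factor)

print(doubling_time(31.7, 6.0))  # ~1.2 years, matching the reported doubling rate
```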

By the second law of thermodynamics and Landauer's principle, irreversible computing cannot continue to be made more energy efficient forever. Assuming that the energy efficiency of computing continues to double every 2.6 years, and taking the most efficient supercomputer as of 2022 as a starting point, [13] the Landauer bound would be reached around 2080; after that point, Koomey's law could no longer hold. Landauer's principle, however, does not constrain the efficiency of reversible computing, which, in conjunction with other beyond-CMOS computing technologies, could permit continued advances in efficiency.
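
As a rough illustration of how such a projection can be made (the starting efficiency of roughly 65 gigaFLOPS per watt and the number of irreversible bit operations charged to each floating-point operation are assumptions for this sketch, not figures taken from the sources above):

```python
import math

# Landauer limit: minimum energy dissipated to erase one bit at temperature T.
k_B = 1.380649e-23                           # Boltzmann constant, J/K
T = 300.0                                    # room temperature, K (assumption)
landauer_j_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J per bit erased

# Assumed 2022 starting point (illustrative only):
flops_per_joule = 65e9                       # roughly 65 GFLOPS/W-class efficiency
bits_erased_per_flop = 1_000                 # assumption; real figures vary widely

joules_per_bit_now = 1.0 / (flops_per_joule * bits_erased_per_flop)
doublings_left = math.log2(joules_per_bit_now / landauer_j_per_bit)
years_left = doublings_left * 2.6            # one doubling every 2.6 years

print(2022 + years_left)                     # on the order of 2080 under these assumptions
```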

Related Research Articles

Moore's law: observation on the growth of integrated circuit capacity

Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

Processor power dissipation or processing unit power dissipation is the process in which computer processors consume electrical energy, and dissipate this energy in the form of heat due to the resistance in the electronic circuits.

Power management is a feature of some electrical appliances, especially copiers, computers, computer CPUs, computer GPUs and computer peripherals such as monitors and printers, that turns off the power or switches the system to a low-power state when inactive. In computing this is known as PC power management and is built around a standard called ACPI, which supersedes APM. All recent computers have ACPI support.

Robert H. Dennard: American electrical engineer (born 1932)

Robert Heath Dennard is an American electrical engineer and inventor.

Miniaturization: trend to manufacture ever smaller products and devices

Miniaturization is the trend to manufacture ever smaller mechanical, optical and electronic products and devices. Examples include miniaturization of mobile phones, computers and vehicle engine downsizing. In electronics, the exponential scaling and miniaturization of silicon MOSFETs leads to the number of transistors on an integrated circuit chip doubling every two years, an observation known as Moore's law. This leads to MOS integrated circuits such as microprocessors and memory chips being built with increasing transistor density, faster performance, and lower power consumption, enabling the miniaturization of electronic devices.

Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.

Landauer's principle is a physical principle pertaining to the lower theoretical limit of energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings.

Edge computing: distributed computing paradigm

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.

Edholm's law, proposed by and named after Phil Edholm, refers to the observation that the three categories of telecommunication, namely wireless (mobile), nomadic and wired networks (fixed), are in lockstep and gradually converging. Edholm's law also holds that data rates for these telecommunications categories increase on similar exponential curves, with the slower rates trailing the faster ones by a predictable time lag. Edholm's law predicts that the bandwidth and data rates double every 18 months, which has proven to be true since the 1970s. The trend is evident in the cases of Internet, cellular (mobile), wireless LAN and wireless personal area networks.

In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed. This rate is typically measured by performance on the LINPACK benchmark when trying to compare between computing systems: an example using this is the Green500 list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing than Moore’s Law.

Mark Papermaster: American business executive (born 1961)

Mark D. Papermaster is an American business executive currently serving as the chief technology officer (CTO) and executive vice president for Technology and Engineering at Advanced Micro Devices (AMD). On January 25, 2019, he was promoted to executive vice president at AMD. Papermaster previously worked at IBM from 1982 to 2008, where he was closely involved in the development of PowerPC technology and served two years as vice president of IBM's blade server division. Papermaster's decision to move from IBM to Apple Inc. in 2008 became central to a court case considering the validity and scope of an employee non-compete clause in the technology industry. He became senior vice president of devices hardware engineering at Apple in 2009, with oversight for devices such as the iPhone. In 2010 he left Apple and joined Cisco Systems as vice president of the company's silicon engineering development. Papermaster joined AMD on October 24, 2011, assuming oversight of all of AMD's technology teams, the creation of all of AMD's products, and AMD's corporate technical direction.

Jonathan Koomey is a researcher who identified a long-term trend in the energy efficiency of computing that has come to be known as Koomey's law. From 1984 to 2003, Koomey was at Lawrence Berkeley National Laboratory, where he founded and led the End-Use Forecasting group, and he has been a visiting professor at Stanford University, Yale University, and the University of California, Berkeley. He has also been a lecturer and a consulting professor at Stanford and a lecturer at UC Berkeley. He is a graduate of Harvard University (A.B.) and the University of California, Berkeley. His research focuses on the economics of greenhouse gas emissions and the effects of information technology on resource use. He has also published extensively on critical thinking skills and business analytics.

The IEEE International Electron Devices Meeting (IEDM) is an annual micro- and nanoelectronics conference held each December that serves as a forum for reporting technological breakthroughs in the areas of semiconductor and related device technologies, design, manufacturing, physics, modeling and circuit-device interaction.

In semiconductor electronics, Dennard scaling, also known as MOSFET scaling, is a scaling law which states roughly that, as transistors get smaller, their power density stays constant, so that the power use stays in proportion with area; both voltage and current scale (downward) with length. The law, originally formulated for MOSFETs, is based on a 1974 paper co-authored by Robert H. Dennard, after whom it is named.
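
A numerical illustration of the constant-power-density claim (the scale factor below is arbitrary, chosen only to make the arithmetic concrete):

```python
s = 0.7                      # shrink all linear dimensions by a factor s < 1

V, I, A = 1.0, 1.0, 1.0      # normalized voltage, current, and transistor area
P = V * I                    # normalized power per transistor

V2, I2, A2 = s * V, s * I, s**2 * A   # voltage and current scale with length, area with length squared
P2 = V2 * I2                          # power per transistor scales as s**2

print(P / A, P2 / A2)        # both 1.0: power density is unchanged
```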

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.

Superconducting logic refers to a class of logic circuits or logic gates that use the unique properties of superconductors, including zero-resistance wires, ultrafast Josephson junction switches, and quantization of magnetic flux (fluxoid). As of 2023, superconducting computing is a form of cryogenic computing, as superconductive electronic circuits require cooling to cryogenic temperatures for operation, typically below 10 kelvin. Often superconducting computing is applied to quantum computing, with an important application known as superconducting quantum computing.

IEEE Rebooting Computing: initiative to rethink the concept of computing

The Task Force on Rebooting Computing (TFRC), housed within IEEE Computer Society, is the new home for the IEEE Rebooting Computing Initiative. Founded in 2013 by the IEEE Future Directions Committee, Rebooting Computing has provided an international, interdisciplinary environment where experts from a wide variety of computer-related fields can come together to explore novel approaches to future computing. IEEE Rebooting Computing began as a global initiative launched by IEEE that proposes to rethink the concept of computing through a holistic look at all aspects of computing, from the device itself to the user interface. As part of its work, IEEE Rebooting Computing provides access to various resources like conferences and educational events, feature and scholarly articles, reports, and videos.

Samuel Naffziger is an American electrical engineer who has been employed at Advanced Micro Devices in Fort Collins, Colorado since 2006. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2014 for his leadership in the development of power management and low-power processor technologies. He is also the Senior Vice President and Product Technology Architect at AMD.

Huang's law: computer science observation

Huang's law is an observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than those in traditional central processing units (CPUs). The observation stands in contrast to Moore's law, which predicted that the number of transistors in a dense integrated circuit (IC) doubles about every two years. Huang's law states that the performance of GPUs will more than double every two years. The hypothesis is subject to questions about its validity.

References

  1. Koomey, Jonathan; Berard, Stephen; Sanchez, Marla; Wong, Henry (March 29, 2010). "Implications of Historical Trends in the Electrical Efficiency of Computing". IEEE Annals of the History of Computing. 33 (3): 46–54. doi:10.1109/MAHC.2010.28. ISSN 1058-6180. S2CID 8305701.
  2. Koomey, Jonathan G. (November 29, 2016). "Our latest on energy efficiency of computing over time, now out in Electronic Design".
  3. Clark, Don (July 16, 2015). "Intel Rechisels the Tablet on Moore's Law". Wall Street Journal.
  4. Naffziger, Sam; Koomey, Jonathan (November 29, 2016). "Energy Efficiency of Computing: What's Next?". Electronic Design.
  5. Brynjolfsson, Erik (September 12, 2011). "Is Koomey's Law eclipsing Moore's Law?". Economics of Information Blog. MIT.
  6. Koomey, Jonathan; Naffziger, Samuel (March 31, 2015). "Moore's Law Might Be Slowing Down, But Not Energy Efficiency". IEEE Spectrum.
  7. Koomey, J. G. (February 26, 2010). "Outperforming Moore's Law". IEEE Spectrum. 47 (3): 68. doi:10.1109/MSPEC.2010.5421913. S2CID 36759624.
  8. Greene, Kate (September 12, 2011). "A New and Improved Moore's Law". MIT Technology Review.
  9. "Computing power—A deeper law than Moore's?". The Economist online. October 10, 2011.
  10. Farncombe, Troy; Iniewski, Kris (2013). "§1.7.4 Power Dissipation". Medical Imaging: Technology and Applications. CRC Press. pp. 16–18. ISBN 978-1-4665-8263-7.
  11. Frantz, G. (2000). "Digital signal processor trends". IEEE Micro. 20 (6): 52–59. doi:10.1109/40.888703.
  12. Thurrott, Paul (June 25, 2020). "AMD Delivers a Major Mobile Efficiency Milestone".
  13. "Top 500 - Efficiency, Power, ..." Archived from the original on May 10, 2022. Retrieved May 26, 2022.
