In computing, overclocking is the practice of increasing the clock rate of a computer to exceed that certified by the manufacturer. Commonly, operating voltage is also increased to maintain a component's operational stability at accelerated speeds. Semiconductor devices operated at higher frequencies and voltages draw more power and produce more heat. [1] An overclocked device may be unreliable or fail completely if the additional heat load is not removed or power delivery components cannot meet increased power demands. Many device warranties state that overclocking or over-specification [2] voids any warranty, but some manufacturers allow overclocking as long as it is done (relatively) safely.[citation needed]
The purpose of overclocking is to increase the operating speed of a given component. [3] Normally, on modern systems, the target of overclocking is increasing the performance of a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or system buses (generally on the motherboard), are commonly involved. The trade-offs are an increase in power consumption (heat), fan noise (cooling), and shortened lifespan for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside of a manufacturer's control, such as ambient temperature and fluctuations in operating voltage. Overclocking techniques in general trade away this safety margin by setting the device to run at the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. For example, operating temperature would need to be more strictly controlled with increased cooling, as the part will be less tolerant of increased temperatures at higher speeds. Also, the base operating voltage may be increased to compensate for unexpected voltage drops and to strengthen signaling and timing margins, as low-voltage excursions are more likely to cause malfunctions at higher operating speeds.
While most modern devices are fairly tolerant of overclocking, all devices have finite limits. Generally, for any given voltage most parts will have a maximum "stable" speed at which they still operate correctly. Past this speed, the device starts giving incorrect results, which can cause malfunctions and sporadic behavior in any system depending on it. While in a PC context the usual result is a system crash, more subtle errors can go undetected, which over a long enough time can give unpleasant surprises such as data corruption (incorrectly calculated results or, worse, data written to storage incorrectly) or the system failing only during certain specific tasks (general usage such as internet browsing and word processing appears fine, but any application demanding advanced graphics crashes the system). There is also a chance of damage to the hardware itself.
At this point, an increase in operating voltage of a part may allow more headroom for further increases in clock speed, but the increased voltage can also significantly increase heat output, as well as shorten the lifespan further. At some point, there will be a limit imposed by the ability to supply the device with sufficient power, the user's ability to cool the part, and the device's own maximum voltage tolerance before it achieves destructive failure. Overzealous use of voltage or inadequate cooling can rapidly degrade a device's performance to the point of failure, or in extreme cases outright destroy it.
The speed gained by overclocking depends largely upon the applications and workloads being run on the system, and what components are being overclocked by the user; benchmarks for different purposes are published.
Conversely, the primary goal of underclocking is to reduce power consumption and the resultant heat generation of a device, with the trade-offs being lower clock speeds and reduced performance. Reducing the cooling requirements needed to keep hardware at a given operating temperature has knock-on benefits such as lowering the number and speed of fans to allow quieter operation, and, in mobile devices, extending battery life per charge. Some manufacturers underclock components of battery-powered equipment to improve battery life, or implement systems that detect when a device is operating under battery power and reduce clock frequency.
Underclocking and undervolting might be attempted on a desktop system to have it operate silently (such as in a home entertainment center) while potentially offering higher performance than low-voltage processor offerings. This would use a "standard-voltage" part run at reduced voltage (while attempting to keep desktop speeds) to meet an acceptable performance/noise target for the build. This was also attractive because using a "standard-voltage" processor in a "low-voltage" application avoided paying the traditional price premium for an officially certified low-voltage version. However, as with overclocking, there is no guarantee of success, and the builder's time spent researching given system/processor combinations, and especially the time and tedium of performing many iterations of stability testing, need to be considered. The usefulness of underclocking (again like overclocking) is determined by which processor offerings, prices, and availability exist at the time of the build. Underclocking is also sometimes used when troubleshooting.
Overclocking has become more accessible, with motherboard makers offering overclocking as a marketing feature on their mainstream product lines. However, the practice is embraced more by enthusiasts than professional users, as overclocking carries a risk of reduced reliability, accuracy, and damage to data and equipment. Additionally, most manufacturer warranties and service agreements do not cover overclocked components or any incidental damage caused by their use. While overclocking can still be an option for increasing personal computing capacity, and thus workflow productivity for professional users, the importance of thoroughly stability-testing components before deploying them in a production environment cannot be overstated.
Overclocking holds several draws for enthusiasts. Overclocking allows testing of components at speeds not currently offered by the manufacturer, or at speeds only officially offered on specialized, higher-priced versions of the product. A general trend in the computing industry is that new technologies tend to debut in the high-end market first, then later trickle down to the performance and mainstream markets. If the high-end part only differs by an increased clock speed, an enthusiast can attempt to overclock a mainstream part to simulate the high-end offering. This can give insight into how over-the-horizon technologies will perform before they are officially available on the mainstream market, which can be especially helpful for other users considering whether they should plan ahead to purchase or upgrade to the new feature when it is officially released.
Some hobbyists enjoy building, tuning, and "Hot-Rodding" their systems in competitive benchmarking competitions, competing with other like-minded users for high scores in standardized computer benchmark suites. Others will purchase a low-cost model of a component in a given product line, and attempt to overclock that part to match a more expensive model's stock performance. Another approach is overclocking older components to attempt to keep pace with increasing system requirements and extend the useful service life of the older part or at least delay purchase of new hardware solely for performance reasons. Another rationale for overclocking older equipment is even if overclocking stresses equipment to the point of failure earlier, little is lost as it is already depreciated, and would have needed to be replaced in any case. [4]
Technically any component that uses a timer (or clock) to synchronize its internal operations can be overclocked. Most efforts for computer components, however, focus on specific components such as processors (CPUs), video cards, motherboard chipsets, and RAM. Most modern processors derive their effective operating speeds by multiplying a base clock (processor bus speed) by an internal multiplier within the processor (the CPU multiplier) to attain their final speed.
Computer processors generally are overclocked by manipulating the CPU multiplier if that option is available, but the processor and other components can also be overclocked by increasing the base speed of the bus clock. Some systems allow additional tuning of other clocks (such as a system clock) that influence the bus clock speed; this, again, is multiplied by the processor to allow for finer adjustment of the final processor speed.
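As a simple illustration of this relationship, the effective core frequency is the product of the base clock and the multiplier, so the same target speed can be approached by raising either factor. The sketch below uses hypothetical example values, not vendor specifications:

# Illustrative sketch: how the base clock and CPU multiplier combine.
# All figures are hypothetical examples, not vendor specifications.

def effective_clock_mhz(base_clock_mhz: float, multiplier: float) -> float:
    """Effective core frequency = base clock x CPU multiplier."""
    return base_clock_mhz * multiplier

stock = effective_clock_mhz(100.0, 36)            # 3600 MHz at stock settings
via_multiplier = effective_clock_mhz(100.0, 40)   # 4000 MHz by raising the multiplier
via_base_clock = effective_clock_mhz(103.0, 36)   # ~3708 MHz by raising the base clock
                                                  # (other clocks derived from it rise too)
print(stock, via_multiplier, via_base_clock)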
Most OEM systems do not expose to the user the adjustments needed to change processor clock speed or voltage in the BIOS of the OEM's motherboard, which precludes overclocking (for warranty and support reasons). The same processor installed on a different motherboard offering adjustments will allow the user to change them.
Any given component will ultimately stop operating reliably past a certain clock speed. Components will generally show some sort of malfunctioning behavior or other indication of compromised stability that alerts the user that a given speed is not stable, but there is always a possibility that a component will permanently fail without warning, even if voltages are kept within some pre-determined safe values. The maximum speed is determined by overclocking to the point of first instability, then accepting the last stable slower setting. Components are only guaranteed to operate correctly up to their rated values; beyond that different samples may have different overclocking potential. The end-point of a given overclock is determined by parameters such as available CPU multipliers, bus dividers, voltages; the user's ability to manage thermal loads, cooling techniques; and several other factors of the individual devices themselves such as semiconductor clock and thermal tolerances, interaction with other components and the rest of the system.
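The trial-and-error procedure described here can be sketched as a simple search loop. The step size, ceiling, and stability check below are placeholders (a hypothetical is_stable function standing in for real stress testing), not a prescribed method:

# Hypothetical sketch of the "raise until unstable, then back off" procedure.
# is_stable() is a stand-in for running a real stress test at each candidate speed.

def is_stable(clock_mhz: float) -> bool:
    """Placeholder stability check; pretend this sample misbehaves above 4200 MHz."""
    return clock_mhz <= 4200.0

def find_max_stable(start_mhz: float, step_mhz: float, ceiling_mhz: float) -> float:
    clock = start_mhz
    # Keep stepping up while the next candidate speed still passes the check.
    while clock + step_mhz <= ceiling_mhz and is_stable(clock + step_mhz):
        clock += step_mhz
    return clock  # the last stable (slower) setting is accepted

print(find_max_stable(start_mhz=3600.0, step_mhz=100.0, ceiling_mhz=5000.0))  # 4200.0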
There are several things to be considered when overclocking. First is to ensure that the component is supplied with adequate power at a voltage sufficient to operate at the new clock rate. Supplying the power with improper settings or applying excessive voltage can permanently damage a component.
In a professional production environment, overclocking is only likely to be used where the increase in speed justifies the cost of the expert support required, the possibly reduced reliability, the consequent effect on maintenance contracts and warranties, and the higher power consumption. If faster speed is required it is often cheaper when all costs are considered to buy faster hardware.
All electronic circuits produce heat generated by the movement of electric current. As clock frequencies in digital circuits and the voltage applied increase, so does the heat generated by components running at the higher performance levels. The relationship between clock frequency and thermal design power (TDP) is linear. However, there is a limit to the maximum frequency, called a "wall". To overcome this issue, overclockers raise the chip voltage to increase the overclocking potential. Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting), to reduce heat emissions while performance remains optimal.
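As a rough worked example of these scaling relationships, a simplified CMOS model puts dynamic power roughly proportional to frequency and to the square of voltage (P ≈ C·V²·f), so a modest voltage bump compounds with the frequency increase. The ratios below are illustrative only and ignore static leakage:

# Rough CMOS dynamic-power model: P is proportional to C * V^2 * f.
# Used only to show relative scaling; real chips also have static leakage power.

def relative_power(voltage_ratio: float, frequency_ratio: float) -> float:
    """Power relative to stock, given voltage and frequency ratios."""
    return (voltage_ratio ** 2) * frequency_ratio

# +10% clock at stock voltage vs. +10% clock with a +10% voltage bump.
print(relative_power(1.00, 1.10))  # ~1.10x power from the frequency increase alone
print(relative_power(1.10, 1.10))  # ~1.33x power once the voltage is raised as well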
Stock cooling systems are designed for the amount of power produced during non-overclocked use; overclocked circuits can require more cooling, such as by powerful fans, larger heat sinks, heat pipes and water cooling. Mass, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity, but is expensive. [5] Aluminium is more widely used; it has good thermal characteristics, though not as good as copper, and is significantly cheaper. Cheaper materials such as steel do not have good thermal characteristics. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost. [5]
Water cooling carries waste heat to a radiator. Thermoelectric cooling devices which actually refrigerate using the Peltier effect can help with high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat elsewhere which must be carried away, often by a convection-based heatsink or a water cooling system.
Other cooling methods are forced convection and phase transition cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases, [6] such as record-setting attempts or one-off experiments rather than cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate (the rate at which a transistor can be switched, not the CPU clock rate [7]) above 500 GHz, which was achieved by cooling the chip to 4.5 K (−268.6 °C; −451.6 °F) using liquid helium. [8] The CPU frequency world record, set in November 2012, is 9008.82 MHz as of December 2022. [9] These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components. [6] Moreover, silicon-based junction gate field-effect transistors (JFETs) will degrade below temperatures of roughly 100 K (−173 °C; −280 °F) and eventually cease to function or "freeze out" at 40 K (−233 °C; −388 °F) since the silicon ceases to be semiconducting, [10] so using extremely cold coolants may cause devices to fail. A blowtorch is sometimes used to temporarily raise the temperature when over-cooling causes problems. [11] [12]
Submersion cooling, used by the Cray-2 supercomputer, involves submerging part of the computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components. [13] A good submersion liquid is Fluorinert made by 3M, which is expensive. Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity. [13]
Amateur overclocking enthusiasts have used a mixture of dry ice and a solvent with a low freezing point, such as acetone or isopropyl alcohol. [14] This cooling bath, often used in laboratories, achieves a temperature of −78 °C (−108 °F). [15] However, this practice is discouraged due to its safety risks; the solvents are flammable and volatile, and dry ice can cause frostbite (through contact with exposed skin) and suffocation (due to the large volume of carbon dioxide generated when it sublimes).
As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.
A large-scale 2011 field study of hardware faults causing a system crash for consumer PCs and laptops showed a four to 20 times increase (depending on CPU manufacturer) in system crashes due to CPU failure for overclocked computers over an eight-month period. [16]
In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor. [17] Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.
A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data and may not detect faults in those operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.
To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs. [18]
In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Superpi, OCCT, AIDA64, Linpack (via the LinX and IntelBurnTest GUIs), SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will manifest themselves during these tests, and if no errors are detected during the test, then the component is deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
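The underlying idea of such tests can be sketched as a known-answer loop: repeatedly compute a result that is known in advance and treat any mismatch as a sign of instability. The toy workload below is a hypothetical stand-in; real tools such as Prime95 use far heavier, carefully chosen workloads to maximize fault coverage:

# Minimal known-answer stability loop (a toy stand-in for real stress tests).

import time

def workload() -> int:
    """Deterministic computation with a known correct result."""
    return sum(i * i for i in range(100_000))

EXPECTED = 333328333350000  # sum of squares 0..99999, precomputed

def stress_test(duration_seconds: float) -> bool:
    deadline = time.monotonic() + duration_seconds
    iterations = 0
    while time.monotonic() < deadline:
        if workload() != EXPECTED:
            print(f"Mismatch after {iterations} iterations: possible instability")
            return False
        iterations += 1
    print(f"{iterations} iterations completed without error")
    return True

stress_test(duration_seconds=2.0)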
Overclockability arises in part from the economics of the manufacturing processes of CPUs and other components. In many cases components are manufactured by the same process and tested after manufacture to determine their actual maximum ratings. Components are then marked with a rating chosen by the market needs of the semiconductor manufacturer. If manufacturing yield is high, more higher-rated components than required may be produced, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In some cases, the true maximum rating of the component may exceed even the highest-rated component sold. Many devices sold with a lower rating may behave in all ways as higher-rated ones, while in the worst case operation at the higher rating may be more problematic.
Notably, higher clock rates always mean greater waste heat generation, as the transistors switch (and dump charge to ground) more often. In some cases, this means that the chief drawback of the overclocked part is far more heat dissipated than the maximums published by the manufacturer. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation". [19]
Benchmarks are used to evaluate performance, and they can become a kind of "sport" in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on the correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime95, which has built-in error checking that fails if the computer is unstable.
Using only the benchmark scores, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.
Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, usually at a price intermediate between that of the standard product and a non-overclocked product of higher performance.
It is speculated that manufacturers implement overclocking prevention mechanisms such as CPU multiplier locking to prevent users from buying lower-priced items and overclocking them. These measures are sometimes marketed as a consumer protection benefit, but are often criticized by buyers.
Many motherboards are sold, and advertised, with extensive facilities for overclocking implemented in hardware and controlled by BIOS settings. [20]
CPU multiplier locking is the process of permanently setting a CPU's clock multiplier. AMD CPUs are unlocked in early editions of a model and locked in later editions, but nearly all Intel CPUs are locked, and recent [when?] models are very resistant to unlocking to prevent overclocking by users. AMD ships unlocked CPUs in its Opteron, FX, Ryzen desktop (except 3D variants) and Black Series line-ups, while Intel uses the monikers "Extreme Edition" and "K-Series". Intel usually has one or two Extreme Edition CPUs on the market, as well as X-series and K-series CPUs analogous to AMD's Black Edition; AMD offers the majority of its desktop range in a Black Edition.
Users usually unlock CPUs to allow overclocking, but sometimes to allow for underclocking in order to maintain the front side bus speed (on older CPUs) compatibility with certain motherboards. Unlocking generally invalidates the manufacturer's warranty, and mistakes can cripple or destroy a CPU. Locking a chip's clock multiplier does not necessarily prevent users from overclocking, as the speed of the front-side bus or PCI multiplier (on newer CPUs) may still be changed to provide a performance increase. AMD Athlon and Athlon XP CPUs are generally unlocked by connecting bridges (jumper-like points) on the top of the CPU with conductive paint or pencil lead. Other CPU models may require different procedures.
Increasing front-side bus or northbridge/PCI clocks can overclock locked CPUs, but this throws many system frequencies out of sync, since the RAM and PCI frequencies are modified as well.
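A short sketch of why this happens: several buses derive their frequencies from the same reference clock through fixed multipliers and dividers, so raising the reference raises all of them. The ratios used below are hypothetical examples:

# Hypothetical derived-clock sketch: buses keyed off the FSB/base clock by
# fixed ratios all scale together when the reference is raised.

def derived_clocks(fsb_mhz: float) -> dict:
    return {
        "CPU (x9 multiplier)": fsb_mhz * 9,
        "RAM (1:1 divider)": fsb_mhz,
        "PCI (1/6 divider)": fsb_mhz / 6,
    }

print(derived_clocks(200.0))  # stock: CPU 1800, RAM 200, PCI ~33.3 MHz
print(derived_clocks(220.0))  # +10% FSB: CPU 1980, RAM 220, PCI ~36.7 MHz,
                              # pushing PCI well past its nominal 33 MHz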
Contrary to popular belief, the "pin mod" method, which claims to unlock older AMD Athlon XP CPUs, does not work. All other unlocked processors from LGA1151 and v2 (including 7th, 8th, and 9th generation) and BGA1440 allow for BCLK overclocking (as long as the OEM allows it), while all other locked processors from the 7th, 8th, and 9th generations were not able to go past 102.7 MHz. 10th-generation processors, however, could reach 103 MHz [21] on the BCLK.
Overclocking a component can only be of noticeable benefit if the component is on the critical path for a process, that is, if it is a bottleneck. If disk access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed; however, there are some scenarios where increasing the clock speed of a processor actually allows an SSD to be read from and written to faster. Overclocking a CPU will not noticeably benefit a game when a graphics card's performance is the "bottleneck" of the game.
Maintaining an overclock also requires ongoing monitoring and adjustment: techniques such as real-time voltage adjustment and adaptive cooling help manage the additional heat and power demands imposed by overclocked components, keeping the hardware within safe temperature and voltage limits to prevent damage and prolong component lifespan.
Graphics cards can also be overclocked. There are utilities to achieve this, such as EVGA's Precision, RivaTuner, AMD Overdrive (on AMD cards only), MSI Afterburner, Zotac Firestorm, and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. [25] It is sometimes possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done by observing on-screen artifacts or unexpected system crashes. It is common to run into one of those problems when overclocking graphics cards; both symptoms at the same time usually mean that the card is severely pushed beyond its heat, clock rate, and/or voltage limits. However, if they are seen when the card is not overclocked, they indicate a faulty card. After a reboot, video settings are reset to the standard values stored in the graphics card firmware, and the maximum clock rate of that specific card has now been deduced.
Some overclockers apply a potentiometer to the graphics card to manually adjust the voltage (which usually invalidates the warranty). This allows for finer adjustments, as overclocking software for graphics cards can only go so far. Excessive voltage increases may damage or destroy components on the graphics card, or the graphics card as a whole.
Flashing and unlocking can be used to improve the performance of a video card without technically overclocking, but doing so is much riskier than overclocking through software alone.
Flashing refers to using the firmware of a different card with the same (or sometimes similar) core and compatible firmware, effectively making it a higher-model card; it can be difficult and may be irreversible. Sometimes standalone software to modify the firmware files can be found, e.g. NiBiTor (the GeForce 6/7 series are well regarded in this respect), without using firmware for a better model of video card. For example, video cards with 3D accelerators (most, as of 2011) have two voltage and clock rate settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third being somewhere between the aforementioned two, serving as a fallback when the card overheats or as a middle stage when going from 2D to 3D operation mode. Therefore, it can be wise to set this middle stage prior to "serious" overclocking, specifically because of this fallback ability; the card can drop down to this clock rate, reducing its efficiency by a few (or sometimes a few dozen, depending on the setting) percent and cool down, without dropping out of 3D mode (and afterwards return to the desired high-performance clock and voltage settings).
Some cards have abilities not directly connected with overclocking. For example, Nvidia's GeForce 6600GT (AGP flavor) has a temperature monitor used internally by the card, invisible to the user if standard firmware is used. Modifying the firmware can display a 'Temperature' tab.
Unlocking refers to enabling extra pipelines or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only) were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipes enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but pipelines and shaders beyond those specified are disabled; the GPU may be fully functional, or may have been found to have faults which do not affect operation at the lower specification. GPUs found to be fully functional can be unlocked successfully, although it is not possible to be sure that there are undiscovered faults; in the worst case the card may become permanently unusable.
A motherboard is the main printed circuit board (PCB) in general-purpose computers and other expandable systems. It holds and allows communication between many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard usually contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general use.
The front-side bus (FSB) is a computer communication interface (bus) that was often used in Intel-chip-based computers during the 1990s and 2000s. The EV6 bus served the same function for competing AMD CPUs. Both typically carry data between the central processing unit (CPU) and a memory controller hub, known as the northbridge.
Processor power dissipation or processing unit power dissipation is the process in which computer processors consume electrical energy, and dissipate this energy in the form of heat due to the resistance in the electronic circuits.
Underclocking, also known as downclocking, is modifying a computer or electronic circuit's timing settings to run at a lower clock rate than is specified. Underclocking is used to reduce a computer's power consumption, increase battery life, reduce heat emission, and it may also increase the system's stability, lifespan/reliability and compatibility. Underclocking may be implemented by the factory, but many computers and components may be underclocked by the end user. Underclocking is the opposite of overclocking.
In computing, the clock rate or clock speed typically refers to the frequency at which the clock generator of a processor can generate pulses, which are used to synchronize the operations of its components, and is used as an indicator of the processor's speed. It is measured in the SI unit of frequency hertz (Hz).
A quiet, silent or fanless PC is a personal computer that makes very little or no noise. Common uses for quiet PCs include video editing, sound mixing and home theater PCs, but noise reduction techniques can also be used to greatly reduce the noise from servers. There is currently no standard definition for a "quiet PC", and the term is generally not used in a business context, but by individuals and the businesses catering to them.
In computing, a northbridge is a microchip that comprises the core logic chipset architecture on motherboards to handle high-performance tasks, especially for older personal computers. It is connected directly to a CPU via the front-side bus (FSB), and is usually used in conjunction with a slower southbridge to manage communication between the CPU and other parts of the motherboard.
Computer cooling is required to remove the waste heat produced by computer components, to keep components within permissible operating temperature limits. Components that are susceptible to temporary malfunction or permanent failure if overheated include integrated circuits such as central processing units (CPUs), chipsets, graphics cards, hard disk drives, and solid state drives.
A voltage regulator module (VRM), sometimes called a processor power module (PPM), is a buck converter that provides the microprocessor and chipset the appropriate supply voltage, converting +3.3 V, +5 V or +12 V to the lower voltages required by the devices, allowing devices with different supply voltages to be mounted on the same motherboard. On personal computer (PC) systems, the VRM is typically made up of power MOSFET devices.
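For an ideal (lossless) buck converter in continuous conduction, the output voltage is approximately the input voltage multiplied by the switching duty cycle, so stepping +12 V down to about 1.2 V corresponds to roughly a 10% duty cycle. The sketch below illustrates only this idealized relationship; real VRMs regulate the duty cycle with feedback across multiple phases:

# Idealized buck-converter relationship: V_out is approximately D * V_in.
# Illustrative only; real VRMs use closed-loop control and multiple phases.

def duty_cycle(v_in: float, v_out: float) -> float:
    """Approximate duty cycle needed to produce v_out from v_in (ideal model)."""
    return v_out / v_in

print(duty_cycle(12.0, 1.2))   # ~0.10 -> about a 10% duty cycle
print(duty_cycle(12.0, 1.35))  # a slightly higher core voltage needs ~11%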
In computing, the clock multiplier sets the ratio of an internal CPU clock rate to the externally supplied clock. This may be implemented with phase-locked loop (PLL) frequency multiplier circuitry. A CPU with a 10x multiplier will thus see 10 internal cycles for every external clock cycle. For example, a system with an external clock of 100 MHz and a 36x clock multiplier will have an internal CPU clock of 3.6 GHz. The external address and data buses of the CPU also use the external clock as a fundamental timing base; however, they could also employ a (small) multiple of this base frequency to transfer data faster.
Sandy Bridge is the codename for Intel's 32 nm microarchitecture used in the second generation of the Intel Core processors. The Sandy Bridge microarchitecture is the successor to Nehalem and Westmere microarchitecture. Intel demonstrated an A1 stepping Sandy Bridge processor in 2009 during Intel Developer Forum (IDF), and released first products based on the architecture in January 2011 under the Core brand.
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. High computer performance may involve one or more factors such as short response time, high throughput, and efficient use of computing resources.
Dynamic frequency scaling is a power management technique in computer architecture whereby the frequency of a microprocessor can be automatically adjusted "on the fly" depending on the actual needs, to conserve power and reduce the amount of heat generated by the chip. Dynamic frequency scaling helps preserve battery on mobile devices and decrease cooling cost and noise on quiet computing settings, or can be useful as a security measure for overheated systems.
In computer architecture, dynamic voltage scaling is a power management technique in which the voltage used in a component is increased or decreased, depending upon circumstances. Dynamic voltage scaling to increase voltage is known as overvolting; dynamic voltage scaling to decrease voltage is known as undervolting. Undervolting is done in order to conserve power, particularly in laptops and other mobile devices, where energy comes from a battery and thus is limited, or in rare cases, to increase reliability. Overvolting is done in order to support higher frequencies for performance.
A memory divider is a ratio which is used to determine the operating clock frequency of computer memory in accordance with front side bus (FSB) frequency, if the memory system is dependent on FSB clock speed. Along with memory latency timings, memory dividers are extensively used in overclocking memory subsystems to find stable, working memory states at higher FSB frequencies. The ratio between DRAM and FSB is commonly referred to as "DRAM:FSB ratio".
Product binning is the categorizing of finished products based on their characteristics. Any mining, harvesting, or manufacturing process will yield products spanning a range of quality and desirability in the marketplace. Binning allows differing quality products to be priced appropriately for various uses and markets.
Phenom II is a family of AMD's multi-core 45 nm processors using the AMD K10 microarchitecture, succeeding the original Phenom. Advanced Micro Devices released the Socket AM2+ version of Phenom II in December 2008, while Socket AM3 versions with DDR3 support, along with an initial batch of triple- and quad-core processors were released on February 9, 2009. Dual-processor systems require Socket F+ for the Quad FX platform. The next-generation Phenom II X6 was released on April 27, 2010.
Computer hardware includes the physical parts of a computer, such as the central processing unit (CPU), random access memory (RAM), motherboard, computer data storage, graphics card, sound card, and computer case. It includes external devices such as a monitor, mouse, keyboard, and speakers.
Bloomfield is the code name for Intel high-end desktop processors sold as Core i7-9xx and single-processor servers sold as Xeon 35xx, in almost identical configurations, replacing the earlier Yorkfield processors. The Bloomfield core is closely related to the dual-processor Gainestown, which has the same CPUID value of 0106Ax and which uses the same socket. Bloomfield uses a different socket than the later Lynnfield and Clarksfield processors based on the same 45 nm Nehalem microarchitecture, even though some of these share the same Intel Core i7 brand.
A stress test of hardware is a form of deliberately intense and thorough testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.