Multi-core processor

Diagram of a generic dual-core processor with CPU-local level-1 caches and a shared, on-die level-2 cache
An Intel Core 2 Duo E6750 dual-core processor
An AMD Athlon X2 6400+ dual-core processor

A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores (for example, dual-core or quad-core), each of which reads and executes program instructions. [1] The instructions are ordinary CPU instructions (such as add, move data, and branch) but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. [2] Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP) or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core.

A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. ARM big.LITTLE has heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.

Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core counts reach into the dozens for general-purpose processors and exceed 10,000 for specialized chips, [3] while in supercomputers (i.e. clusters of chips) the count can exceed 10 million (in one case reaching 20 million processing elements in addition to the host processors). [4]

The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel simultaneously on multiple cores; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest effort in refactoring. [5]
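For a concrete sense of this limit, Amdahl's law can be written as a short formula; the figures below are an illustrative worked example, not measurements from the cited source.

```latex
% Amdahl's law: speedup S on n cores when a fraction p of the work
% can be parallelized (the remaining 1 - p stays serial).
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

% Illustrative values: with p = 0.95 on n = 8 cores,
% S(8) = 1 / (0.05 + 0.95/8) \approx 5.9,
% and no number of cores can push the speedup past 1 / (1 - p) = 20.
```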

The parallelization of software is a significant ongoing topic of research. Proposed programming models for multi-core systems aim to give designers flexibility in how applications are mapped onto the cores while remaining adaptable across different parallel architectures. [6]

In the consumer market, dual-core processors (that is, processors with two cores) started becoming commonplace in the late 2000s. [7] Quad-core processors were also being adopted for higher-end systems. In the late 2010s, hexa-core (six-core) processors entered the mainstream. [8]

Terminology

The terms multi-core and dual-core most commonly refer to some sort of central processing unit (CPU), but are sometimes also applied to digital signal processors (DSPs) and systems on a chip (SoCs). The terms are generally used only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die; separate microprocessor dies in the same package are generally referred to by another name, such as multi-chip module. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted.

In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units (which often contain special circuitry to facilitate communication between each other).

The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores (tens to thousands [9] ). [10]

Some systems use many soft microprocessor cores placed on a single FPGA. Each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core.[citation needed]

Development

While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space (due to refined manufacturing processes) and the demand for increased TLP led to the development of multi-core CPUs.

Commercial incentives

Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit (IC), which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing (CISC) architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s.

As the rate of clock speed improvement slowed, increased use of parallel computing in the form of multi-core processors was pursued to improve overall processing performance; placing multiple cores on the same CPU chip could also lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing; each core has an x86 architecture. [11] [12]

Technical factors

Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known.

Additionally, in order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, sacrificing lower manufacturing costs for higher performance in some applications and systems. Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is the further integration of peripheral functions into the chip.

Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (bus snooping) operations. Put simply, signals between different CPUs travel shorter distances and therefore degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.

Assuming that the die can physically fit into the package, multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front-side bus (FSB). In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider-core design. Also, adding more cache suffers from diminishing returns.

Multi-core chips also allow higher performance at lower energy, which can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core CPU is generally more energy-efficient than a single large monolithic core, the chip as a whole delivers higher performance with less energy. A challenge here, however, is the additional overhead of writing parallel code. [14]

Disadvantages

Maximizing the usage of the computing resources provided by multi-core processors requires adjustments both to the operating system (OS) support and to existing application software. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications.

Integration of a multi-core chip can lower chip production yields. Multi-core chips are also more difficult to manage thermally than lower-density single-core designs. Intel has partially countered the yield problem by creating its quad-core designs by combining two dual-core dies, each with a unified cache, in a single package, so that any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core CPU. From an architectural point of view, ultimately, single CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance. Two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. In a 2009 report, Dr Jun Ni showed that if a single core is close to being memory-bandwidth limited, then going to dual-core might give a 30% to 70% improvement; if memory bandwidth is not a problem, then a 90% improvement can be expected; however, Amdahl's law makes this claim dubious. [15] Conversely, an application that used two CPUs could end up running faster on a single-core one if communication between the CPUs was the limiting factor, which would count as more than a 100% improvement.

Hardware

The trend in processor development has been towards an ever-increasing number of cores, as processors with hundreds or even thousands of cores become theoretically possible. [16] In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" (or asymmetric) cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. For example, a big.LITTLE design pairs a high-performance core (called 'big') with a low-power core (called 'LITTLE'). There is also a trend towards improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra-fine-grain power management and dynamic voltage and frequency scaling, as in laptop computers and portable media players.

Chips designed from the outset for a large number of cores (rather than having evolved from single core designs) are sometimes referred to as manycore designs, emphasising qualitative differences.

Architecture

The composition and balance of the cores in multi-core architecture show great variety. Some architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different, "heterogeneous" role.

How multiple cores are implemented and integrated significantly affects both the programming effort required of developers and consumers' expectations of apps and interactivity on the device. [17] A device advertised as octa-core only has eight independent cores if marketed as True Octa-core, or similar styling, as opposed to merely having two sets of quad-cores, each with fixed clock speeds. [18] [19]

The article "CPU designers debate multi-core future" by Rick Merritt, EE Times 2008, [20] includes these comments:

Chuck Moore [...] suggested computers should be like cellphones, using a variety of specialty cores to run modular software scheduled by a high-level applications programming interface.

[...] Atsushi Hasegawa, a senior chief engineer at Renesas, generally agreed. He suggested the cellphone's use of many specialty cores working in concert is a good model for future multi-core designs.

[...] Anant Agarwal, founder and chief executive of startup Tilera, took the opposing view. He said multi-core chips need to be homogeneous collections of general-purpose cores to keep the software model simple.

Software effects

An outdated version of an anti-virus application may create a new thread for a scan process, while its GUI thread waits for commands from the user (e.g. cancel the scan). In such cases, a multi-core architecture is of little benefit for the application itself due to the single thread doing all the heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interweaving of processing on data shared between threads (see thread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level demand for maximum use of computer hardware. Also, serial tasks like decoding the entropy encoding algorithms used in video codecs are impossible to parallelize because each result generated is used to help create the next result of the entropy decoding algorithm.
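A minimal C++ sketch of the pattern described above (the scan routine and cancel flag are hypothetical stand-ins): the scan runs on a single worker thread while the controlling thread only waits for a cancel request, so additional cores contribute nothing to the scan itself.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical scan routine: all the heavy lifting happens in this one
// thread, so on a multi-core CPU the other cores sit largely idle.
void ScanFiles(std::atomic<bool>& cancel) {
    for (int file = 0; file < 100000 && !cancel; ++file) {
        // ... examine one file ...
    }
}

int main() {
    std::atomic<bool> cancel{false};
    std::thread scanner(ScanFiles, std::ref(cancel));

    // The "GUI" thread merely waits for user commands; here we simulate
    // the user pressing "cancel" after one second.
    std::this_thread::sleep_for(std::chrono::seconds(1));
    cancel = true;
    scanner.join();
}
```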

Given the increasing emphasis on multi-core chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.

The telecommunications market was one of the first to need a new design for parallel datapath packet processing, because multi-core processors were adopted very quickly for both the datapath and the control plane. These MPUs are set to replace [21] the traditional network processors that were based on proprietary microcode or picocode.

Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models such as Cilk Plus, OpenMP, OpenHMPP, FastFlow, Skandium, MPI, and Erlang can be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called Threading Building Blocks (TBB). Other research efforts include the Codeplay Sieve System, Cray's Chapel, Sun's Fortress, and IBM's X10.
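As an illustration of one of these models, a minimal OpenMP loop in C++ (built with an OpenMP-capable compiler, for example g++ with -fopenmp) spreads iterations across the available cores; without the OpenMP flag the pragma is simply ignored and the loop runs serially.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0);
    double dot = 0.0;

    // Iterations are divided among the cores; the reduction clause gives
    // each thread its own partial sum, combined when the loop finishes.
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < n; ++i)
        dot += a[i] * b[i];

    std::printf("dot = %.1f\n", dot);   // expected: 2097152.0
}
```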

Multi-core processing has also affected modern computational software development. Developers programming in newer languages might find that their languages do not support multi-core functionality. This then requires the use of numerical libraries to access code written in languages like C and Fortran, which perform math computations faster[citation needed] than newer languages like C#. Intel's MKL and AMD's ACML are written in these native languages and take advantage of multi-core processing. Balancing the application workload across processors can be problematic, especially if they have different performance characteristics. There are different conceptual models to deal with the problem, for example using a coordination language and program building blocks (programming libraries or higher-order functions). Each block can have a different native implementation for each processor type. Users simply program using these abstractions and an intelligent compiler chooses the best implementation based on the context. [22]
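A rough sketch of that building-block idea (choose_implementation and the scale kernels are hypothetical; a real framework would leave the choice to a compiler or runtime rather than an explicit test): the caller programs against one abstraction, and the framework picks a native implementation suited to the hardware.

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// One "building block": scale every element of a vector. The caller never
// names an implementation; the framework selects one per processor type.
using Kernel = std::function<void(std::vector<float>&)>;

void scale_serial(std::vector<float>& v) {
    for (float& x : v) x *= 2.0f;
}

void scale_threaded(std::vector<float>& v) {
    // A crude multi-core variant: split the vector between two threads.
    const std::size_t half = v.size() / 2;
    std::thread t([&] { for (std::size_t i = 0; i < half; ++i) v[i] *= 2.0f; });
    for (std::size_t i = half; i < v.size(); ++i) v[i] *= 2.0f;
    t.join();
}

// Stand-in for the "intelligent compiler" choosing by context.
Kernel choose_implementation(unsigned cores) {
    return cores > 1 ? Kernel(scale_threaded) : Kernel(scale_serial);
}

int main() {
    std::vector<float> data(1000000, 1.0f);
    choose_implementation(std::thread::hardware_concurrency())(data);
}
```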

Managing concurrency acquires a central role in developing parallel applications. The basic steps in designing parallel applications are as follows (a brief sketch after the list illustrates them):

Partitioning
The partitioning stage of a design is intended to expose opportunities for parallel execution. Hence, the focus is on defining a large number of small tasks in order to yield what is termed a fine-grained decomposition of a problem.
Communication
The tasks generated by a partition are intended to execute concurrently but cannot, in general, execute independently. The computation to be performed in one task will typically require data associated with another task. Data must then be transferred between tasks so as to allow computation to proceed. This information flow is specified in the communication phase of a design.
Agglomeration
In the third stage, development moves from the abstract toward the concrete. Developers revisit decisions made in the partitioning and communication phases with a view to obtaining an algorithm that will execute efficiently on some class of parallel computer. In particular, developers consider whether it is useful to combine, or agglomerate, tasks identified by the partitioning phase, so as to provide a smaller number of tasks, each of greater size. They also determine whether it is worthwhile to replicate data and computation.
Mapping
In the fourth and final stage of the design of parallel algorithms, the developers specify where each task is to execute. This mapping problem does not arise on uniprocessors or on shared-memory computers that provide automatic task scheduling.
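A brief C++ sketch (an assumed example, not taken from the source) of how these four stages might play out for something as simple as summing an array:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Partitioning:  every element-wise addition is a fine-grained task.
// Communication: partial results must eventually be combined.
// Agglomeration: tasks are grouped into one contiguous chunk per core.
// Mapping:       each chunk is handed to its own hardware thread.
double parallel_sum(const std::vector<double>& data) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (data.size() + cores - 1) / cores;

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            const std::size_t begin = std::min(data.size(), c * chunk);
            const std::size_t end   = std::min(data.size(), begin + chunk);
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (std::thread& w : workers) w.join();

    // The only cross-task communication left is combining the partial sums.
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

In Foster's terms, the agglomeration and mapping decisions here are the chunk size and the one-thread-per-chunk assignment; on a shared-memory multi-core machine the operating system performs the final placement of those threads onto cores.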

On the server side, by contrast, multi-core processors are ideal because they allow many users to connect to a site simultaneously with independent threads of execution. This allows Web servers and application servers to achieve much better throughput.
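A minimal sketch of that server-side pattern (handle_request is a hypothetical placeholder for real request processing): each client request runs on its own thread, and the operating system spreads those threads across the cores.

```cpp
#include <thread>
#include <vector>

// Hypothetical request handler; requests are independent of one another,
// so handlers can run on different cores without coordination.
void handle_request(int request_id) {
    // ... parse the request, do the work, send the response ...
    (void)request_id;
}

int main() {
    std::vector<std::thread> workers;
    for (int id = 0; id < 8; ++id)           // e.g. eight concurrent clients
        workers.emplace_back(handle_request, id);
    for (std::thread& w : workers) w.join(); // throughput scales with core count
}
```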

Licensing

Vendors may license some software "per processor". This can give rise to ambiguity, because a "processor" may consist either of a single core or of a combination of cores.

Embedded applications

An embedded system on a plug-in card with processor, memory, power supply, and external interfaces

Embedded computing operates in an area of processor technology distinct from that of "mainstream" PCs. The same technological drivers towards multi-core apply here too. Indeed, in many cases the application is a "natural" fit for multi-core technologies, if the task can easily be partitioned between the different processors.

In addition, embedded software is typically developed for a specific hardware release, making issues of software portability, legacy code or supporting independent developers less critical than is the case for PC or enterprise computing. As a result, it is easier for developers to adopt new technologies, and there is consequently a greater variety of multi-core processing architectures and suppliers.

Network processors

As of 2010, multi-core network processors have become mainstream, with companies such as Freescale Semiconductor, Cavium Networks, Wintegra and Broadcom all manufacturing products with eight processors. For the system developer, a key challenge is how to exploit all the cores in these devices to achieve maximum networking performance at the system level, despite the performance limitations inherent in a symmetric multiprocessing (SMP) operating system. Companies such as 6WIND provide portable packet processing software designed so that the networking data plane runs in a fast path environment outside the operating system of the network device. [25]

Digital signal processing

In digital signal processing the same trend applies: Texas Instruments has the three-core TMS320C6488 and four-core TMS320C5441, and Freescale the four-core MSC8144 and six-core MSC8156 (both companies have stated they are working on eight-core successors). Newer entries include the Storm-1 family from Stream Processors, Inc., with 40 and 80 general-purpose ALUs per chip, all programmable in C as a SIMD engine, and Picochip, with 300 processors on a single die, focused on communication applications.

Heterogeneous systems

In heterogeneous computing, where a system uses more than one kind of processor or cores, multi-core solutions are becoming more common: Xilinx Zynq UltraScale+ MPSoC has a quad-core ARM Cortex-A53 and dual-core ARM Cortex-R5. Software solutions such as OpenAMP are being used to help with inter-processor communication.

Mobile devices may use the ARM big.LITTLE architecture.

Hardware examples

Commercial

Free

Academic

Benchmarks

The research and development of multicore processors often compares many options, and benchmarks are developed to help such evaluations. Existing benchmarks include SPLASH-2, PARSEC, and COSMIC for heterogeneous systems. [49]

See also

Notes

  1. ^ Digital signal processors (DSPs) have used multi-core architectures for much longer than high-end general-purpose processors. A typical example of a DSP-specific implementation would be a combination of a RISC CPU and a DSP MPU. This allows for the design of products that require a general-purpose processor for user interfaces and a DSP for real-time data processing; this type of design is common in mobile phones. In other applications, a growing number of companies have developed multi-core DSPs with very large numbers of processors.
  2. ^ Two types of operating systems are able to use a dual-CPU multiprocessor: partitioned multiprocessing and symmetric multiprocessing (SMP). In a partitioned architecture, each CPU boots into a separate segment of physical memory and operates independently; in an SMP OS, processors work in a shared space, executing threads within the OS independently.

Related Research Articles

<span class="mw-page-title-main">Central processing unit</span> Central computer component which executes instructions

A central processing unit (CPU)—also called a central processor or main processor—is the most important processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs).

<span class="mw-page-title-main">Itanium</span> Family of 64-bit Intel microprocessors

Itanium is a discontinued family of 64-bit Intel microprocessors that implement the Intel Itanium architecture. The Itanium architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel. Launched in June 2001, Intel initially marketed the processors for enterprise servers and high-performance computing systems. In the concept phase, engineers said "we could run circles around PowerPC...we could kill the x86." Early predictions were that IA-64 would expand to the lower-end servers, supplanting Xeon, and eventually penetrate personal computers, supplanting reduced instruction set computing (RISC) and complex instruction set computing (CISC) architectures for all general-purpose applications.

<span class="mw-page-title-main">Microprocessor</span> Computer processor contained on an integrated-circuit chip

A microprocessor is a computer processor for which the data processing logic and control is included on a single integrated circuit (IC), or a small number of ICs. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit (CPU). The IC is capable of interpreting and executing program instructions and performing arithmetic operations. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system.

<span class="mw-page-title-main">Symmetric multiprocessing</span> The equal sharing of all resources by multiple identical processors

Symmetric multiprocessing or shared-memory multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.

<span class="mw-page-title-main">Hyper-threading</span> Proprietary simultaneous multithreading implementation by Intel

Hyper-threading is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations performed on x86 microprocessors. It was introduced on Xeon server processors in February 2002 and on Pentium 4 desktop processors in November 2002. Since then, Intel has included this technology in Itanium, Atom, and Core 'i' Series CPUs, among others.

<span class="mw-page-title-main">Opteron</span> Server and workstation processor line by AMD

Opteron is AMD's former x86 server and workstation processor line, and was the first processor to support the AMD64 instruction set architecture. It was released on April 22, 2003, with the SledgeHammer core (K8) and was intended to compete in the server and workstation markets, particularly in the same segment as the Intel Xeon processor. Processors based on the AMD K10 microarchitecture were announced on September 10, 2007, featuring a new quad-core configuration. The last released Opteron CPUs are the Piledriver-based Opteron 4300 and 6300 series processors, codenamed "Seoul" and "Abu Dhabi" respectively.

<span class="mw-page-title-main">Xeon</span> Line of Intel server and workstation processors

Xeon is a brand of x86 microprocessors designed, manufactured, and marketed by Intel, targeted at the non-consumer workstation, server, and embedded markets. It was introduced in June 1998. Xeon processors are based on the same architecture as regular desktop-grade CPUs, but have advanced features such as support for error correction code (ECC) memory, higher core counts, more PCI Express lanes, support for larger amounts of RAM, larger cache memory and extra provision for enterprise-grade reliability, availability and serviceability (RAS) features responsible for handling hardware exceptions through the Machine Check Architecture (MCA). They are often capable of safely continuing execution where a normal processor cannot due to these extra RAS features, depending on the type and severity of the machine-check exception (MCE). Some also support multi-socket systems with two, four, or eight sockets through use of the Ultra Path Interconnect (UPI) bus.

Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures.

<span class="mw-page-title-main">Coprocessor</span> Type of computer processor

A coprocessor is a computer processor used to supplement the functions of the primary processor. Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.

<span class="mw-page-title-main">Pentium D</span> Family of Intel microprocessors

Pentium D is a range of desktop 64-bit x86-64 processors based on the NetBurst microarchitecture, which is the dual-core variant of the Pentium 4 manufactured by Intel. Each CPU comprised two cores. The brand's first processor, codenamed Smithfield and manufactured on the 90 nm process, was released on May 25, 2005, followed by the 65 nm Presler nine months later. The core implementations on the 90 nm "Smithfield" and later 65 nm "Presler" are designed differently but are functionally the same. The 90 nm "Smithfield" contains a single die, with two adjoined but functionally separate CPU cores cut from the same wafer. The later 65 nm "Presler" utilized a multi-chip module package, where two discrete dies, each containing a single core, reside on the CPU substrate. Neither the 90 nm "Smithfield" nor the 65 nm "Presler" was capable of direct core-to-core communication, relying instead on the northbridge link to send information between the two cores.

In the fields of digital electronics and computer hardware, multi-channel memory architecture is a technology that increases the data transfer rate between the DRAM memory and the memory controller by adding more channels of communication between them. Theoretically, this multiplies the data rate by exactly the number of channels present. Dual-channel memory employs two channels. The technique goes back as far as the 1960s having been used in IBM System/360 Model 91 and in CDC 6600.

<span class="mw-page-title-main">P6 (microarchitecture)</span> Intel processor microarchitecture

The P6 microarchitecture is the sixth-generation Intel x86 microarchitecture, implemented by the Pentium Pro microprocessor that was introduced in November 1995. It is frequently referred to as i686. It was planned to be succeeded by the NetBurst microarchitecture used by the Pentium 4 in 2000, but was revived for the Pentium M line of microprocessors. The successor to the Pentium M variant of the P6 microarchitecture is the Core microarchitecture which in turn is also derived from P6.

The Intel Core microarchitecture is a multi-core processor microarchitecture launched by Intel in mid-2006. It is a major evolution over the Yonah, the previous iteration of the P6 microarchitecture series which started in 1995 with Pentium Pro. It also replaced the NetBurst microarchitecture, which suffered from high power consumption and heat intensity due to an inefficient pipeline designed for high clock rate. In early 2004 the new version of NetBurst (Prescott) needed very high power to reach the clocks it needed for competitive performance, making it unsuitable for the shift to dual/multi-core CPUs. On May 7, 2004 Intel confirmed the cancellation of the next NetBurst, Tejas and Jayhawk. Intel had been developing Merom, the 64-bit evolution of the Pentium M, since 2001, and decided to expand it to all market segments, replacing NetBurst in desktop computers and servers. It inherited from Pentium M the choice of a short and efficient pipeline, delivering superior performance despite not reaching the high clocks of NetBurst.

<span class="mw-page-title-main">Pentium</span> Brand of semi-discontinued microprocessors produced by Intel

Pentium is a semi-discontinued series of x86 architecture-compatible microprocessors produced by Intel. The original Pentium was first released on March 22, 1993. The name "Pentium" is originally derived from the Greek word pente (πεντε), meaning "five", a reference to the prior numeric naming convention of Intel's 80x86 processors (8086–80486), with the Latin ending -ium since the processor would otherwise have been named 80586 using that convention.

<span class="mw-page-title-main">QorIQ</span> Microprocessor range

QorIQ is a brand of ARM-based and Power ISA–based communications microprocessors from NXP Semiconductors. It is the evolutionary step from the PowerQUICC platform, and initial products were built around one or more e500mc cores and came in five different product platforms, P1, P2, P3, P4, and P5, segmented by performance and functionality. The platform keeps software compatibility with older PowerPC products such as the PowerQUICC platform. In 2012 Freescale announced ARM-based QorIQ offerings beginning in 2013.

Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores. Manycore processors are used extensively in embedded computers and high-performance computing.

<span class="mw-page-title-main">Kentsfield (microprocessor)</span>

Kentsfield is the code name of the first Intel desktop Core 2 Quad and quad-core Xeon CPUs, released on November 2, 2006. The top-of-the-line Kentsfields were Core 2 Extreme models numbered QX6x00, while the mainstream Core 2 Quad models were numbered Q6x00. All of them featured 8 MiB of L2 cache, split into two 4 MiB blocks. The mainstream 65 nanometer Core 2 Quad Q6600, clocked at 2.4 GHz, was launched on January 8, 2007 at US$851. July 22, 2007 marked the release of the Core 2 Quad Q6700 and Core 2 Extreme QX6850 Kentsfields at US$530 and US$999 respectively; the price of the Q6600 was later reduced to US$266. Both Kentsfield and Kentsfield XE use product code 80562.

<span class="mw-page-title-main">Xeon Phi</span> Series of x86 manycore processors from Intel

Xeon Phi was a series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP.

<span class="mw-page-title-main">Broadwell (microarchitecture)</span> Fifth generation of Intel Core processors

Broadwell is the fifth generation of the Intel Core processor. It is Intel's codename for the 14 nanometer die shrink of its Haswell microarchitecture. It is a "tick" in Intel's tick–tock principle as the next step in semiconductor fabrication. Like some of the previous tick-tock iterations, Broadwell did not completely replace the full range of CPUs from the previous microarchitecture (Haswell), as there were no low-end desktop CPUs based on Broadwell.

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.

References

  1. Rouse, Margaret (March 27, 2007). "Definition: multi-core processor". TechTarget. Archived from the original on August 5, 2010. Retrieved March 6, 2013.
  2. Schauer, Bryan. "Multicore Processors – A Necessity" (PDF). Archived from the original (PDF) on 2011-11-25.
  3. Smith, Ryan. "NVIDIA Announces the GeForce RTX 30 Series: Ampere For Gaming, Starting With RTX 3080 & RTX 3090". www.anandtech.com. Retrieved 2020-09-15.
  4. "Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway | TOP500". www.top500.org. Retrieved 2020-09-15.
  5. Suleman, Aater (May 20, 2011). "What makes parallel programming hard?". FutureChips. Archived from the original on May 29, 2011. Retrieved March 6, 2013.
  6. Duran, A (2011). "Ompss: a proposal for programming heterogeneous multi-core architectures". Parallel Processing Letters. 21 (2): 173–193. doi:10.1142/S0129626411000151.
  7. "Definition of dual core". PCMAG. Retrieved 2023-10-27.
  8. "Intel taking its six-core processors mainstream in 2018 with Coffee Lake family". ZDNET. Retrieved 2023-10-27.
  9. Schor, David (November 2017). "The 2,048-core PEZY-SC2 sets a Green500 record". WikiChip.
  10. Vajda, András (2011-06-10). Programming Many-Core Chips. Springer. p. 3. ISBN   978-1-4419-9739-5.
  11. Shrout, Ryan (December 2, 2009). "Intel Shows 48-core x86 Processor as Single-chip Cloud Computer". Archived from the original on January 5, 2016. Retrieved May 17, 2015.
  12. "Intel unveils 48-core cloud computing silicon chip". BBC. December 3, 2009. Archived from the original on December 6, 2012. Retrieved March 6, 2013.
  13. Patterson, David A. "Future of computer architecture." Berkeley EECS Annual Research Symposium (BEARS), College of Engineering, UC Berkeley, US. 2006.
  14. Suleman, Aater (May 19, 2011). "Q & A: Do multicores save energy? Not really". Archived from the original on December 16, 2012. Retrieved March 6, 2013.
  15. Ni, Jun. "Enabling Technology of Multi-core Computing for Medical Imaging" (PDF). Archived from the original (PDF) on 2010-07-05. Retrieved 17 February 2013.
  16. Clark, Jack. "Intel: Why a 1,000-core chip is feasible". ZDNet. Archived from the original on 6 August 2015. Retrieved 6 August 2015.
  17. Kudikala, Chakri (Aug 27, 2016). "These 5 Myths About the Octa-Core Phones Are Actually True". Giz Bot.
  18. "MediaTek Launches MT6592 True Octa-Core Mobile Platform" (Press release). MediaTek. November 20, 2013. Archived from the original on October 29, 2020.
  19. "What is an Octa-core processor". Samsung. Galaxy smartphones run on either Octa-core (2.3GHz Quad + 1.6GHz Quad) or Quad-core (2.15GHz + 1.6GHz Dual) processors
  20. Merritt, Rick (February 6, 2008). "CPU designers debate multi-core future". EE Times . Retrieved October 21, 2023.
  21. "Multicore Packet Processing Forum". Archived from the original on 2009-12-21.
  22. John Darlinton; Moustafa Ghanem; Yike Guo; Hing Wing To (1996). "Guided Resource Organisation in Heterogeneous Parallel Computing". Journal of High Performance Computing. 4 (1): 13–23. CiteSeerX   10.1.1.37.4309 .
  23. Bright, Peter (4 December 2015). "Windows Server 2016 moving to per core, not per socket, licensing". Ars Technica . Condé Nast. Archived from the original on 4 December 2015. Retrieved 5 December 2015.
  24. Compare: "The Licensing Of Oracle Technology Products". OMT-CO Operations Management Technology Consulting GmbH. Archived from the original on 2014-03-21. Retrieved 2014-03-04.
  25. "6WINDGATE Software: Network Optimization Software – SDN Software – Control Plane Software | 6WIND".
  26. "Sempron™ 3850 APU with Radeon™ R3 Series | AMD". AMD. Archived from the original on 4 May 2019. Retrieved 5 May 2019.
  27. "Intel® Atom™ Processor C Series Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  28. "Intel® Atom™ Processor Z Series Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  29. "Intel Preps Dual-Core Celeron Processors". 11 October 2007. Archived from the original on 4 November 2007. Retrieved 12 November 2007.
  30. "Intel® Celeron® Processor J Series Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  31. "Products formerly Yonah". ark.intel.com. Retrieved 2019-05-04.
  32. "Products formerly Conroe". ark.intel.com. Retrieved 2019-05-04.
  33. "Products formerly Kentsfield". ark.intel.com. Retrieved 2019-05-04.
  34. "Intel® Core™ X-series Processors Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  35. "Intel® Itanium® Processor Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  36. "Intel® Pentium® Processor D Series Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  37. Zazaian, Mike (September 26, 2006). "Intel: 80 Cores by 2011". Archived from the original on 2006-11-09. Retrieved 2006-09-28.
  38. Kowaliski, Cyril (February 18, 2014). "Intel releases 15-core Xeon E7 v2 processor". Archived from the original on 2014-10-11.
  39. "Intel Xeon Processor E7 v3 Family". Intel. Archived from the original on 2015-07-07.
  40. "Intel Xeon Processor E7 v2 Family". Intel. Archived from the original on 2015-07-07.
  41. "Intel Xeon Processor E3 v2 Family". Intel. Archived from the original on 2015-07-07.
  42. "Intel shows off Xeon Platinum CPU with up to 56 cores and 112 threads". TechSpot. 2 April 2019. Retrieved 2019-05-04.
  43. "2nd Gen Intel® Xeon® Scalable Processors Brief". Intel. Retrieved 2019-05-04.
  44. "Intel® Xeon Phi™ x100 Product Family Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  45. "Intel® Xeon Phi™ 72x5 Processor Family Product Specifications". ark.intel.com. Retrieved 2019-05-04.
  46. Cole, Bernard (September 24, 2008). "40-core processor with Forth-based IDE tools unveiled".
  47. Hammond, Lance; et al. (1999). The Stanford Hydra CMP (PDF). Hot Chips. Retrieved 27 June 2023.
  48. Chacos, Brad (June 20, 2016). "Meet KiloCore, a 1,000-core processor so efficient it could run on a AA battery". PC World . Archived from the original on June 23, 2016.
  49. "COSMIC Heterogeneous Multiprocessor Benchmark Suite". Archived from the original on 2015-07-03.

Further reading