Embedded Supercomputing

Embedded Supercomputing [1] (EmbSup) is a relatively new approach that targets fine-grained and coarse-grained parallelism together. The combination is considered an effective way to exploit both levels of parallelism: fine-grained parallelism is mapped onto FPGAs, while coarse-grained parallelism is mapped onto supercomputers or clusters.

In essence, Embedded Supercomputing is a hybrid network of CPU and FPGA hardware in which the FPGA acts as an external co-processor to the CPU. The programming model, however, is still evolving and faces many challenges.

Programming Model for EmbSup

Figure: Embedded Supercomputing programming model (Embedded SC.JPG)
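
The programming model splits an application along the two levels of granularity described above. Below is a minimal, hypothetical host-side sketch of that split in C: coarse-grained parallelism is expressed with MPI across cluster nodes, while each node hands its fine-grained inner kernel to an attached FPGA co-processor. The function fpga_offload_scale() is a placeholder for whatever vendor-specific offload call a real board would provide; here it simply runs on the CPU.

/*
 * Hypothetical sketch of the EmbSup model: MPI distributes coarse-grained
 * work across cluster nodes; each node forwards its fine-grained kernel
 * to an FPGA co-processor (stubbed out below).
 */
#include <mpi.h>
#include <stdlib.h>

#define N 1024  /* total problem size, assumed divisible by the number of ranks */

/* Placeholder for the FPGA co-processor call; runs on the CPU in this sketch. */
static void fpga_offload_scale(float *chunk, int len, float factor)
{
    for (int i = 0; i < len; i++)
        chunk[i] *= factor;          /* fine-grained, data-parallel work */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk_len = N / size;
    float *data = NULL;
    float *chunk = malloc(chunk_len * sizeof(float));

    if (rank == 0) {                 /* root node prepares the full data set */
        data = malloc(N * sizeof(float));
        for (int i = 0; i < N; i++)
            data[i] = (float)i;
    }

    /* Coarse-grained parallelism: scatter work across the cluster nodes. */
    MPI_Scatter(data, chunk_len, MPI_FLOAT,
                chunk, chunk_len, MPI_FLOAT, 0, MPI_COMM_WORLD);

    /* Fine-grained parallelism: each node hands its chunk to its FPGA. */
    fpga_offload_scale(chunk, chunk_len, 2.0f);

    MPI_Gather(chunk, chunk_len, MPI_FLOAT,
               data, chunk_len, MPI_FLOAT, 0, MPI_COMM_WORLD);

    free(chunk);
    free(data);
    MPI_Finalize();
    return 0;
}

Built with an MPI toolchain (for example mpicc) and launched with mpirun, each rank processes N / size elements; in a real EmbSup deployment the placeholder call would be replaced by the board's actual offload API.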

Related Research Articles

Superscalar processor: CPU that implements instruction-level parallelism within a single processor

A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor that can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows for more throughput than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor, but an execution resource within a single CPU such as an arithmetic logic unit.

System on a chip: type of integrated circuit

A system on chip (SoC) is an integrated circuit that integrates all or most components of a computer or other electronic system. These components typically include a central processing unit (CPU), memory, input/output ports and secondary storage – all on a single substrate or microchip, about the size of a coin. Depending on the application, it may also contain digital, analog, mixed-signal, and radio frequency signal processing functions; chips that integrate only the processor and digital logic are sometimes referred to simply as "application processors". As they are integrated on a single substrate, SoCs consume much less power and take up much less area than multi-chip designs with equivalent functionality. Because of this, SoCs are very common in the mobile computing and edge computing markets. Systems on chip are typically fabricated using metal–oxide–semiconductor (MOS) technology, and are commonly used in embedded systems and the Internet of Things.

iWarp was an experimental parallel supercomputer architecture developed as a joint project by Intel and Carnegie Mellon University. The project started in 1988, as a follow-up to CMU's previous WARP research project, in order to explore building an entire parallel-computing "node" in a single microprocessor, complete with memory and communications links. In this respect the iWarp is very similar to the INMOS transputer and nCUBE.

Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs), is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric.

Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.

Granularity, the condition of existing in granules or grains, refers to the extent to which a material or system is composed of distinguishable pieces. It can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities.

Nios II is a 32-bit embedded-processor architecture designed specifically for the Altera family of field-programmable gate array (FPGA) integrated circuits. Nios II incorporates many enhancements over the original Nios architecture, making it more suitable for a wider range of embedded computing applications, from digital signal processing (DSP) to system-control.

The MicroBlaze is a soft microprocessor core designed for Xilinx field-programmable gate arrays (FPGA). As a soft-core processor, MicroBlaze is implemented entirely in the general-purpose memory and logic fabric of Xilinx FPGAs.

In computing, hardware acceleration is the use of computer hardware specially made to perform some functions more efficiently than is possible in software running on a general-purpose CPU. Any transformation of data or routine that can be computed can be calculated purely in software running on a generic CPU, purely in custom-made hardware, or in some mix of both. An operation can be computed faster in application-specific hardware designed or programmed to compute it than it can be in software running on a general-purpose processor. Each approach has advantages and disadvantages. The implementation of computing tasks in hardware to decrease latency and increase throughput is known as hardware acceleration.

Josh Fisher: American and Spanish computer scientist

Joseph A "Josh" Fisher is an American and Spanish computer scientist noted for his work on VLIW architectures, compiling, and instruction-level parallelism, and for the founding of Multiflow Computer. He is a Hewlett-Packard Senior Fellow (Emeritus).

Stream processing is a computer programming paradigm, equivalent to dataflow programming, event stream processing, and reactive programming, that allows some applications to more easily exploit a limited form of parallel processing. Such applications can use multiple computational units, such as the floating point unit on a graphics processing unit or field-programmable gate arrays (FPGAs), without explicitly managing allocation, synchronization, or communication among those units.
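
As a rough illustration (not taken from the source), the stream style can be sketched in C: the programmer writes only the per-element kernel, and the mapping of elements onto compute units is left to a runtime, approximated here by an OpenMP parallel-for; a GPU or FPGA streaming runtime could play the same role.

/*
 * Illustrative stream-style computation: the programmer supplies only the
 * per-element kernel; distribution over compute units is left to the runtime.
 */
#include <stdio.h>

#define N 8

/* The kernel: applied independently to every element of the input stream. */
static float kernel(float x)
{
    return 2.0f * x + 1.0f;
}

int main(void)
{
    float in[N], out[N];
    for (int i = 0; i < N; i++)
        in[i] = (float)i;

    /* No explicit allocation, synchronization or communication is written
     * by the programmer; the parallel-for runtime decides the mapping. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        out[i] = kernel(in[i]);

    for (int i = 0; i < N; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}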

In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction-set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators.
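
The trigger-port idea can be illustrated with a toy C model (an illustration only, not a real TTA toolchain): the "program" consists solely of moves, and writing to the trigger port of an adder functional unit is what causes the addition to happen.

/*
 * Toy model of a transport-triggered functional unit: an adder with an
 * operand port and a trigger port.  Moving data to the trigger port is
 * what fires the computation; there is no explicit add instruction.
 */
#include <stdio.h>

typedef struct {
    int operand;   /* latched first input       */
    int trigger;   /* writing here fires the FU */
    int result;    /* readable output port      */
} adder_fu;

/* Move data to the operand port: nothing is computed yet. */
static void move_to_operand(adder_fu *fu, int value) { fu->operand = value; }

/* Move data to the trigger port: the transport itself triggers the add. */
static void move_to_trigger(adder_fu *fu, int value)
{
    fu->trigger = value;
    fu->result  = fu->operand + fu->trigger;   /* computation as a side effect */
}

int main(void)
{
    adder_fu adder = {0, 0, 0};

    /* "Program": two moves and a read, no explicit add instruction. */
    move_to_operand(&adder, 2);
    move_to_trigger(&adder, 3);
    printf("result = %d\n", adder.result);      /* prints: result = 5 */
    return 0;
}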

A soft microprocessor is a microprocessor core that can be wholly implemented using logic synthesis. It can be implemented via different semiconductor devices containing programmable logic, including both high-end and commodity variations.

Handel-C is a high-level programming language which targets low-level hardware, most commonly used in the programming of FPGAs. It is a rich subset of C, with non-standard extensions to control hardware instantiation with an emphasis on parallelism. Handel-C is to hardware design what the first high-level programming languages were to programming CPUs. Unlike many other design languages that target a specific architecture, Handel-C can be compiled to a number of design languages and then synthesised to the corresponding hardware. This frees developers to concentrate on the programming task at hand rather than the idiosyncrasies of a specific design language and architecture.

Impulse C is a subset of the C programming language combined with a C-compatible function library supporting parallel programming, in particular for programming of applications targeting FPGA devices. It is developed by Impulse Accelerated Technologies of Kirkland, Washington.

Task parallelism is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data. A common type of task parallelism is pipelining which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
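
A minimal sketch in C with POSIX threads, assuming nothing beyond the standard pthreads API: two different tasks (a sum and a maximum) run concurrently over the same data, in contrast to data parallelism, where a single task would be split across the data.

/*
 * Task parallelism sketch: two different tasks run at the same time
 * over the same data set, each in its own thread.
 */
#include <pthread.h>
#include <stdio.h>

#define N 8
static const int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};

static long sum_result;
static int  max_result;

/* Task 1: compute the sum of the shared data. */
static void *sum_task(void *arg)
{
    (void)arg;
    long s = 0;
    for (int i = 0; i < N; i++)
        s += data[i];
    sum_result = s;
    return NULL;
}

/* Task 2: compute the maximum of the same data. */
static void *max_task(void *arg)
{
    (void)arg;
    int m = data[0];
    for (int i = 1; i < N; i++)
        if (data[i] > m)
            m = data[i];
    max_result = m;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sum_task, NULL);   /* task 1 */
    pthread_create(&t2, NULL, max_task, NULL);   /* task 2, same data */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum = %ld, max = %d\n", sum_result, max_result);
    return 0;
}

Built with -pthread; a pipeline, as mentioned above, would instead chain such tasks so that each stage consumes the previous stage's output.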

Multithreading (computer architecture): ability of a central processing unit (CPU) or a single core in a multi-core processor to execute multiple processes or threads concurrently

In computer architecture, multithreading is the ability of a central processing unit (CPU) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB).

Plurality (company)

Plurality Ltd. is an Israeli semiconductor company, the developer of the HyperCore technology and the HAL multi-core processor. The company is a member of the Multicore Association.

The Xputer is a design for a reconfigurable computer, proposed by computer scientist Reiner Hartenstein. Hartenstein uses various terms to describe the various innovations in the design, including config-ware, flow-ware, morph-ware, and "anti-machine".

In parallel computing, granularity of a task is a measure of the amount of work which is performed by that task.

References

  1. Deconinck, Geert; De Florio, Vincenzo; Varvarigou, Theodora A.; Verentziotis, Evangelos (March 2002). "The EFTOS Approach to Dependability in Embedded Supercomputing". IEEE Transactions on Reliability. 51: 76–90. doi:10.1109/24.994916.