SciEngines GmbH

SciEngines GmbH is a privately owned company founded in 2007 as a spin-off of the COPACOBANA [1] project of the Universities of Bochum and Kiel, both in Germany. The project's goal was to create a platform for affordable custom hardware attacks. COPACOBANA [2] is a massively parallel reconfigurable computer that can be used to perform a so-called brute-force attack to recover DES-encrypted [3] [4] data. It consists of 120 commercially available, reconfigurable integrated circuits (FPGAs): Xilinx Spartan3-1000 chips that run in parallel, forming a massively parallel system. Since 2007, SciEngines GmbH has enhanced COPACOBANA and developed its successors. COPACOBANA has also become a well-known reference platform for cryptanalysis and custom-hardware-based attacks on symmetric, asymmetric, and stream ciphers. In 2008, attacks against A5/1, a stream cipher used to encrypt voice streams in GSM, were published as the first known real-world attack utilizing off-the-shelf custom hardware. [5] [6]
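Conceptually, such a brute-force attack splits the DES keyspace evenly across the FPGAs so that each chip searches a disjoint range of keys. The following Python sketch illustrates only the partitioning arithmetic; it is not SciEngines code, and in the real machine the key search itself runs inside the FPGAs.

```python
# Sketch: partitioning the 56-bit DES keyspace across 120 FPGAs.
# Illustrative only -- actual COPACOBANA firmware searches keys in hardware.

NUM_FPGAS = 120
KEYSPACE = 2 ** 56  # DES has 56 effective key bits

def key_range(fpga_id, num_fpgas=NUM_FPGAS, keyspace=KEYSPACE):
    """Return the half-open [start, end) range of keys for one FPGA."""
    chunk = keyspace // num_fpgas
    start = fpga_id * chunk
    # The last FPGA absorbs the remainder of the integer division.
    end = keyspace if fpga_id == num_fpgas - 1 else start + chunk
    return start, end

# Every key is covered exactly once across all chips.
ranges = [key_range(i) for i in range(NUM_FPGAS)]
assert ranges[0][0] == 0 and ranges[-1][1] == KEYSPACE
```

Because each range is independent, the attack scales almost linearly with the number of chips, which is what makes the FPGA-cluster approach effective for exhaustive key search.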

In 2008 the company introduced the RIVYERA S3-5000, [7] dramatically enhancing the performance of the computer by using 128 Spartan-3 5000 FPGAs. The SciEngines RIVYERA currently holds the record for brute-force breaking of DES, utilizing 128 Spartan-3 5000 FPGAs. [8] Current systems provide a density of up to 256 Spartan-6 FPGAs per single system, enabling scientific uses beyond the field of cryptanalysis, such as bioinformatics. [9]

2006: the original developers of COPACOBANA [10] form the company
2007: introduction of COPACOBANA (COPACOBANA S3-1000) as a commercial off-the-shelf (COTS) system
2007: first demonstration of COPACOBANA 5000 [11]
2008: introduction of the RIVYERA S3-5000, the direct successor of COPACOBANA 5000 and COPACOBANA. The RIVYERA architecture introduced a new, performance-optimized bus system and a fully API-encapsulated communication framework.
2008: demonstration of the COPACOBANA V4-SX35, a 128 Virtex-4 SX35 FPGA cluster (COPACOBANA shared-bus architecture)
2008: introduction of the RIVYERA V4-SX35, a 128 Virtex-4 SX35 FPGA cluster (RIVYERA HPC architecture)
2009: introduction of the RIVYERA S6-LX150
2011: introduction of 256 user-usable FPGAs per RIVYERA S6-LX150 computer


By integrating a standard off-the-shelf Intel CPU and mainboard into the FPGA computer, RIVYERA [12] systems can execute most standard code without modification. SciEngines' aim is that programmers only have to port the most time-consuming 5% of their code to the FPGAs. To this end, the company bundles an Eclipse-like development environment that allows implementation in hardware description languages such as VHDL and Verilog, as well as in C-based languages. An application programming interface (API) for C, C++, Java, and Fortran allows scientists and programmers to adapt their code to benefit from an application-specific hardware architecture.
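The offload model described above can be sketched as follows. The `fpga` object and its `run_kernel` method are hypothetical stand-ins, not the actual SciEngines API; the sketch only illustrates the pattern of keeping most code on the CPU and dispatching the hot kernel to an accelerator when one is present.

```python
# Sketch of the host-side offload pattern: the bulk of the program runs
# unchanged on the CPU, and only the hot kernel is dispatched to the
# accelerator. The "fpga" object and run_kernel() are hypothetical.

def hot_kernel_software(data):
    """Portable CPU fallback for the time-consuming kernel."""
    return [x * x for x in data]

def run(data, fpga=None):
    # Most of the code path is ordinary host code ...
    prepared = [x + 1 for x in data]
    # ... and only the hot 5% is routed to the FPGA when one is present.
    if fpga is not None:
        return fpga.run_kernel(prepared)  # hypothetical accelerator call
    return hot_kernel_software(prepared)

print(run([1, 2, 3]))  # -> [4, 9, 16]
```

The point of the pattern is that the software fallback and the hardware kernel share one call site, so the surrounding 95% of the program does not change when an FPGA implementation becomes available.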

Related Research Articles

Field-programmable gate array – Array of logic gates that are reprogrammable

A field-programmable gate array (FPGA) is a type of integrated circuit that can be programmed or reprogrammed after manufacturing. It consists of an array of programmable logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as in the telecommunications, automotive, aerospace, and industrial sectors.

Parallel computing – Programming paradigm in which many processes are executed simultaneously

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
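Of the forms listed above, data parallelism is the one most relevant to FPGA clusters like RIVYERA: the same operation is applied to disjoint chunks of the input at the same time. A minimal Python illustration (a generic sketch, unrelated to any product API):

```python
# Data parallelism in miniature: split the input into chunks, process the
# chunks simultaneously in worker processes, then combine the results.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    """The per-worker operation, applied independently to each chunk."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # -> 499500
```

Because the chunks are independent, adding workers (or, in hardware, more FPGAs) increases throughput without changing the per-chunk operation.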

Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high-speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs), is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric.

General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.

Hardware acceleration – Specialized computer hardware

Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.

Custom hardware attack – Concept in cryptography

In cryptography, a custom hardware attack uses specifically designed application-specific integrated circuits (ASIC) to decipher encrypted messages.

Nallatech is a computer hardware and software firm based in Camarillo, California, United States. The company specializes in field-programmable gate array (FPGA) integrated circuit technology applied in computing. As of 2007 the company's primary markets include defense and high-performance computing. Nallatech was acquired by Interconnect Systems, Inc. in 2008, which in turn was bought by Molex in 2016.

Impulse C is a subset of the C programming language combined with a C-compatible function library supporting parallel programming, in particular for programming of applications targeting FPGA devices. It is developed by Impulse Accelerated Technologies of Kirkland, Washington.

This is a glossary of terms used in the field of reconfigurable computing and reconfigurable computing systems, as opposed to the traditional von Neumann architecture.

Mitrionics was a Swedish company manufacturing softcore reconfigurable processors. It has been mentioned as one of EETimes' "60 Emerging Startups". The company was founded in 2001 by Stefan Möhl and Pontus Borg to commercialize a massively parallel reconfigurable processor implemented on FPGAs. It can be described as turning general-purpose chips into massively parallel processors that can be used for high-performance computing. Mitrionics' massively parallel processor is available on Cray, Nallatech, and Silicon Graphics systems.

The Advanced Learning and Research Institute (ALaRI), a faculty of informatics, was established in 1999 at the University of Lugano to promote research and education in embedded systems. Within a few years the Faculty of Informatics became one of Switzerland's major destinations for teaching and research, ranking third after the two Federal Institutes of Technology, Zurich and Lausanne.

Ambric, Inc. was a designer of computer processors that developed the Ambric architecture. Its Am2045 Massively Parallel Processor Array (MPPA) chips were primarily used in high-performance embedded systems such as medical imaging, video, and signal processing.

A massively parallel processor array, also known as a multi-purpose processor array (MPPA), is a type of integrated circuit which has a massively parallel array of hundreds or thousands of CPUs and RAM memories. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.

QPACE is a massively parallel and scalable supercomputer designed for applications in lattice quantum chromodynamics.

Computing with Memory refers to computing platforms where function responses are stored in one- or two-dimensional memory arrays, in the form of lookup tables (LUTs), and functions are evaluated by retrieving the values from the LUTs. These computing platforms can follow either a purely spatial computing model, as in a field-programmable gate array (FPGA), or a temporal computing model, where a function is evaluated across multiple clock cycles. The latter approach aims at reducing the overhead of programmable interconnect in FPGAs by folding interconnect resources inside a computing element. It uses dense two-dimensional memory arrays to store large multiple-input multiple-output LUTs. Computing with Memory differs from Computing in Memory or processor-in-memory (PIM) concepts, which are widely investigated in the context of integrating a processor and memory on the same chip to reduce memory latency and increase bandwidth. These architectures seek to reduce the distance data travels between the processor and the memory. The Berkeley IRAM project is one notable contribution in the area of PIM architectures.
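The LUT-based evaluation described above can be shown in a few lines of Python. The popcount example and function names are illustrative, not drawn from any specific platform: a function is tabulated once, and every later "evaluation" is a single memory read.

```python
# Computing with memory in miniature: precompute a function into a lookup
# table once, then evaluate it by memory retrieval instead of arithmetic.

def build_lut(func, bits):
    """Tabulate func over all bits-wide inputs (the one-dimensional case)."""
    return [func(x) for x in range(2 ** bits)]

# Example: 8-bit population count, a common hardware primitive.
popcount_lut = build_lut(lambda x: bin(x).count("1"), 8)

def popcount(x):
    # A single table read replaces a loop over the bits of x.
    return popcount_lut[x & 0xFF]

print(popcount(0b10110101))  # -> 5
```

The trade-off is the one the paragraph describes: memory (the table grows as 2^bits) is spent to eliminate per-evaluation computation.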

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.

Advanced Simulation Library

Advanced Simulation Library (ASL) is a free and open-source hardware-accelerated multiphysics simulation platform. It enables users to write customized numerical solvers in C++ and deploy them on a variety of massively parallel architectures, ranging from inexpensive FPGAs, DSPs and GPUs up to heterogeneous clusters and supercomputers. Its internal computational engine is written in OpenCL and utilizes matrix-free solution techniques. ASL implements a variety of modern numerical methods, e.g. the level-set method, the lattice Boltzmann method, and the immersed boundary method. The mesh-free, immersed-boundary approach allows users to move from CAD directly to simulation, reducing pre-processing effort and the number of potential errors. ASL can be used to model various coupled physical and chemical phenomena, especially in the field of computational fluid dynamics. It is distributed under the free GNU Affero General Public License with an optional commercial license.

An AI accelerator is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Lesley Shannon

Lesley Shannon is a Canadian professor who is Chair for the Computer Engineering Option in the School of Engineering Science at Simon Fraser University. She is also the current NSERC Chair for Women in Science and Engineering for BC and Yukon. Shannon's chair operates the Westcoast Women in Engineering, Science and Technology (WWEST) program to promote equity, diversity and inclusion in STEM.

A domain-specific architecture (DSA) is a programmable computer architecture specifically tailored to operate very efficiently within the confines of a given application domain. The term is often used in contrast to general-purpose architectures, such as CPUs, that are designed to operate on any computer program.

References

  1. "COPACOBANA Project".
  2. "COPACOBANA : FPGA based DES Cracker". 2009-08-16.
  3. "SHARCS Workshop, April 3.- 4., 2006, Cologne, How to Break DES for € 8,980" (PDF).
  4. "COPACOBANA in german computer magazine c't".
  5. "A Real-World Attack Breaking A5/1 within Hours" (PDF).
  6. "Hardware-Based Cryptanalysis of the GSM A5/1 Encryption Algorithm" (PDF).
  7. "RIVYERA from SciEngines".
  8. "Break DES in less than a single day" (Press release). Demonstrated at 2009 Workshop.
  9. Forster, Michael; Szymczak, Silke; Ellinghaus, David; Hemmrich, Georg; Rühlemann, Malte; Kraemer, Lars; Mucha, Sören; Wienbrandt, Lars; Stanulla, Martin; Franke, Andre (2015). "Vy-PER: eliminating false positive detection of virus integration events in next generation sequencing data". Scientific Reports. 5: 11534. doi:10.1038/srep11534. PMC 4499804. PMID 26166306.
  10. "COPACOBANA : FPGA based DES Cracker". 2009-08-16.
  11. "RIVYERA from SciEngines" (PDF).
  12. "HOCHLEISTUNGSCLUSTER RIVYER".

Further reading