| Type of business | Public |
|---|---|
| Available in | German |
| Founded | 1985 |
| Headquarters | Aachen, Germany |
| Area served | North America, South America, Europe, Asia Pacific |
| Founder(s) | Falk-Dietrich Kübler, Gerhard Peise, Bernd Wolf |
| Services | Surface inspection systems |
| URL | http://www.parsytec.de |
Isra Vision Parsytec AG, a subsidiary of Isra Vision, was originally founded in 1985 as Parsytec (parallel system technology) in Aachen, Germany.
Parsytec gained recognition in the late 1980s and early 1990s as a manufacturer of transputer-based parallel systems. Its product lineup ranged from single transputer plug-in boards for IBM PCs to large, massively parallel systems with thousands of transputers (or processors), such as the Parsytec GC. Some sources describe the latter as ultracomputer-sized, scalable multicomputers (smC). [1] [2]
As part of ISRA VISION AG, the company now focuses on solutions in the machine vision and industrial image processing sectors. ISRA Parsytec products are primarily used for quality and surface inspection, particularly in the metal and paper industries.
Parsytec was founded in 1985 in Aachen, Germany, by Falk-Dietrich Kübler, Gerhard H. Peise, and Bernd Wolff, with an 800,000 DM grant from the Federal Ministry for Research and Technology (BMFT). [3]
Unlike SUPRENUM, Parsytec focused its systems, particularly in pattern recognition, on industrial applications such as surface inspection. As a result, the company not only captured a significant market share in European academia but also attracted numerous industrial customers, including many outside Germany. By 1988, exports accounted for approximately one-third of Parsytec's revenue. The company's turnover figures were as follows: zero in 1985, 1.5 million DM in 1986, 5.2 million DM in 1988, 9 million DM in 1989, 15 million DM in 1990, and 17 million USD in 1991.
To allow Parsytec to focus on research and development, a separate entity, ParaCom, was established to handle sales and marketing operations. While Parsytec/ParaCom maintained its headquarters in Aachen, Germany, it also operated subsidiary sales offices in Chemnitz (Germany), Southampton (United Kingdom), Chicago (USA), St Petersburg (Russia), and Moscow (Russia). [4] In Japan, Parsytec's machines were distributed by Matsushita. [3]
Between 1988 and 1994, Parsytec developed a broad range of transputer-based computers, culminating in the "Parsytec GC" (GigaCluster), which was available in configurations ranging from 64 to 16,384 transputers. [5]
Parsytec went public in mid-1999 with an initial public offering (IPO) on the German Stock Exchange in Frankfurt.
On 30 April 2006, founder Falk-D. Kübler left the company. [6]
In July 2007, [7] ISRA VISION AG acquired 52.6% of Parsytec AG. [8] The delisting of Parsytec shares from the stock market began in December of the same year, and since 18 April 2008, Parsytec shares have no longer been listed on the stock exchange. [9]
While Parsytec had a workforce of roughly 130 staff in the early 1990s, the ISRA VISION Group employed more than 500 people in 2012/2013. [10]
Today, the core business of ISRA Parsytec within the ISRA VISION Group is the development and distribution of surface inspection systems for strip products in the metal and paper industries.
Parsytec's product range included the Megaframe, MultiCluster, SuperCluster, GigaCluster (GC), x'plorer, Cognitive Computer (CC), and Powermouse systems, together with the PARIX software environment. In total, approximately 700 stand-alone systems (SC and GC) had been shipped.
Initially, Parsytec participated in the GPMIMD (General Purpose MIMD) [11] project under the umbrella of the ESPRIT program, [12] both of which were funded by the European Commission's Directorate for Science. However, after significant disagreements with other participants—Meiko, Parsys, Inmos, and Telmat—regarding the choice of a common physical architecture, Parsytec left the project and announced its own T9000-based machine, the GC. Due to Inmos' issues with the T9000, Parsytec was forced to switch to a system using a combination of Motorola MPC 601 CPUs and Inmos T805 processors. This led to the development of Parsytec's "hybrid" systems (e.g., GC/PP), where transputers were used as communication processors while the computational tasks were offloaded to the PowerPCs.
Parsytec's cluster systems were operated by an external workstation, typically a SUN workstation (e.g., Sun-4). [13]
There is considerable confusion regarding the names of Parsytec products. This stems partly from the architecture, and partly from the aforementioned unavailability of the Inmos T9000, which forced Parsytec to use the T805 and PowerPC processors instead. Systems equipped with PowerPC processors were given the prefix "Power". The unusual spelling of x'plorer led to variations such as xPlorer, and the GigaCluster is sometimes referred to as the GigaCube or Grand Challenge.

The architecture of GC systems is based on self-contained GigaCubes. The basic architectural element of a Parsytec system was a cluster, which consisted, among other components, of four transputers/processors (i.e., a cluster is a node in the classical sense).

A GigaCube (sometimes referred to as a supernode or meganode) [14] consisted of four clusters (nodes), each with 16 Inmos T805 transputers (30 MHz), RAM (up to 4 MB per T805), and an additional redundant T805 (the 17th processor). It also included local link connections and four Inmos C004 routing chips. Hardware fault tolerance was achieved by linking each T805 to a different C004. [15]
Megaframe [16] [17] was the product name for a family of transputer-based parallel processing modules, [18] some of which could be used to upgrade an IBM PC. [19] As a standalone system, a Megaframe could hold up to ten processor modules. Different versions of the modules were available, such as one featuring a 32-bit transputer T414 with floating-point hardware (Motorola 68881), 1 MB of RAM (80 nanosecond access time), and a throughput of 10 MIPS, or one with four 16-bit transputers (T22x) with 64 kB of RAM. Additionally, cards for special features were offered, including a graphics processor with a resolution of 1280 x 1024 pixels and an I/O "cluster" with terminal and SCSI interfaces. [20]
The MultiCluster-1 series consisted of statically configurable systems that could be tailored to specific user requirements, such as the number of processors, amount of memory, I/O configuration, and system topology. The required processor topology could be configured using UniLink connections, fed through a special backplane. Additionally, four external sockets were provided.
MultiCluster-2 used network configuration units (NCUs) that provided flexible, dynamically configurable interconnection networks. The multiuser environment could support up to eight users through Parsytec's multiple virtual architecture software. The NCU design was based on the Inmos crossbar switch, the C004, which offered full crossbar connectivity for up to 16 transputers. Each NCU, made of C004s, connected up to 96 UniLinks, linking internal as well as external transputers and other I/O subsystems. MultiCluster-2 allowed for the configuration of various fixed interconnection topologies, such as tree or mesh structures. [14]
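The dynamic reconfiguration an NCU performed can be sketched in Python. The 16-port size comes from the C004 description above; the class and method names are purely illustrative, not any real Parsytec interface:

```python
# Toy model of a C004-style crossbar: any input link can be routed to any
# output link, with each link in use by at most one connection at a time.
class Crossbar:
    def __init__(self, ports=16):   # a single C004 switches up to 16 links
        self.ports = ports
        self.routes = {}            # source link -> destination link

    def connect(self, src, dst):
        if src in self.routes or dst in self.routes.values():
            raise ValueError("link already in use")
        self.routes[src] = dst

    def disconnect(self, src):
        self.routes.pop(src, None)

# An NCU would reconfigure connections like this between application runs.
xbar = Crossbar()
xbar.connect(0, 5)   # route link 0 to link 5
xbar.disconnect(0)   # tear the connection down again
xbar.connect(0, 7)   # re-route link 0 elsewhere
```

The point of the dynamic design, in contrast to MultiCluster-1's static backplane wiring, is that topologies could be changed in software rather than by re-cabling.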
SuperCluster [21] had a hierarchical, cluster-based design. A basic unit was a 16-transputer T800, fully connected cluster, and larger systems included additional levels of NCUs to form the necessary connections. The Network Configuration Manager (NCM) software controlled the NCUs and dynamically established the required connections. Each transputer could be equipped with 1 to 32 MB of dynamic RAM, with single-error correction and double-error detection. [14]
The GigaCluster (GC) was a parallel computer produced in the early 1990s. A GigaCluster was composed of GigaCubes. [22]
Designed for the Inmos T9000 transputers, the GigaCluster could never be launched as originally planned, as the Inmos T9000 transputers never made it to market on time. This led to the development of the GC/PP (PowerPlus), in which two Motorola MPC 601 processors (80 MHz) served as the dedicated CPUs, supported by four Inmos T805 transputers (30 MHz). [23]
While the GC/PP was a hybrid system, the GCel ("entry level") was based solely on the T805. [24] [25] The GCel was designed to be upgradeable to the T9000 transputers (had they arrived in time), thus becoming a full GC. Since the T9000 was Inmos' evolutionary successor to the T800, the upgrade was planned to be simple and straightforward. This was because, firstly, both transputers shared the same instruction set, and secondly, they had a similar performance ratio of compute power to communication throughput. A theoretical speed-up factor of 10 was expected, [22] but in the end, it was never achieved.
The network structure of the GC was a two-dimensional lattice, with an inter-communication speed between the nodes (i.e., clusters in Parsytec's terminology) of 20 Mbit/s. For its time, the concept of the GC was exceptionally modular and scalable.
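The lattice structure can be illustrated with a short Python helper that computes a node's direct neighbours in a 2D grid; the grid size used in the example is arbitrary:

```python
def mesh_neighbors(x, y, width, height):
    """Return the direct neighbours of node (x, y) in a 2D lattice."""
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(nx, ny) for nx, ny in candidates
            if 0 <= nx < width and 0 <= ny < height]

# A corner node has 2 neighbours, an interior node has 4.
print(mesh_neighbors(0, 0, 4, 4))       # [(1, 0), (0, 1)]
print(len(mesh_neighbors(1, 1, 4, 4)))  # 4
```

Each such neighbour relation corresponds to one of the 20 Mbit/s inter-cluster links.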
A so-called GigaCube was a module that already formed a one-gigaflop system and served as the building block for larger systems. A module (or "cube" in Parsytec's terminology) contained four clusters of 16 transputers each, as described above. By combining such modules, one could theoretically connect up to 16,384 processors to create a very powerful system.
Typical installations included:
| System | Number of CPUs | Number of GigaCubes |
|---|---|---|
| GC-1 | 64 | 1 |
| GC-2 | 256 | 4 |
| GC-3 | 1024 | 16 |
| GC-4 | 4096 | 64 |
| GC-5 | 16384 | 256 |
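These configurations follow a simple rule: each GigaCube contributes 4 × 16 = 64 worker transputers, and each GC step quadruples the machine. A quick Python check (the quadrupling rule is inferred from the figures above):

```python
# Worker transputers per GigaCube: 4 clusters of 16 T805s each
# (the redundant 17th transputer per cluster is not counted).
TRANSPUTERS_PER_CUBE = 4 * 16

for level in range(1, 6):
    cubes = 4 ** (level - 1)       # GC-1 has 1 cube; each step quadruples
    cpus = cubes * TRANSPUTERS_PER_CUBE
    print(f"GC-{level}: {cpus:>5} CPUs in {cubes:>3} GigaCubes")
```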
The two largest installations of the GC that were actually shipped had 1,024 processors (16 modules, with 64 transputers per module) and were operated at the data centers of the Universities of Cologne and Paderborn. In October 2004, the system at Paderborn was transferred to the Heinz Nixdorf Museums Forum, [26] where it is now inoperable.
The power consumption of a system with 1,024 processors was approximately 27 kW, and its weight was nearly a ton. In 1992, the system was priced at around 1.5 million DM. While the smaller versions, up to GC-3, were air-cooled, water cooling was mandatory for the larger systems.
In 1992, a GC with 1,024 processors ranked on the TOP500 list [27] of the world's fastest supercomputer installations; within Germany, it was the 22nd-fastest computer.
In 1995, there were nine Parsytec computers on the TOP500 list, including two GC/PP 192 installations, which ranked 117th and 188th. [28]
In 1996, they still ranked 230th and 231st on the TOP500 list. [29] [30]
The x'plorer model came in two versions: the initial version featured 16 transputers, each with access to 4 MB of RAM, and was simply called x'plorer. When Parsytec later switched to the PPC architecture, it was renamed POWERx'plorer and featured 8 MPC 601 CPUs. Both models were housed in the same desktop case, designed by Via 4 Design. [31]

In either version, the x'plorer was essentially a single "slice" — which Parsytec referred to as a cluster [32] — of a GigaCube (PPC or Transputer), with the smallest version (GC-1) using 4 of these clusters. As a result, some referred to it as a "GC-0.25". [33]
The POWERx'plorer was based on 8 processing units arranged in a 2D mesh. Each processing unit included one MPC 601 CPU with local memory, plus a transputer that established and maintained the communication links, following the hybrid scheme described above.
The Parsytec CC (Cognitive Computer) system [35] [36] [37] was an autonomous unit at the card rack level.
The CC card rack subsystem provided the system with its infrastructure, including power supply and cooling. The system could be configured as a standard 19-inch rack-mounted unit, which accepted various 6U plug-in modules.
The CC system [38] was a distributed-memory, message-passing parallel computer and falls into the MIMD category of parallel computers.
Two different versions of the CC system were available; one of them, the CCe, is described below.
In all CC systems, the nodes were directly connected to the same router, which implemented an active hardware 8x8 crossbar switch for up to 8 connections using the 40 MB/s high-speed link.
For the CCe, the software was based on IBM's AIX 4.1 UNIX operating system, along with Parsytec's parallel programming environment, Embedded PARIX (EPX). [40] This setup combined a standard UNIX environment (compilers, tools, and libraries) with an advanced software development environment. The system was integrated into the local area network using standard Ethernet. A CC node had a peak performance of 266 MFLOPS; the peak performance of the 8-node CC system installed at Geneva University Hospital was therefore 2.1 GFLOPS. [41]
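The quoted system figure is simply the per-node peak scaled by the node count, which a one-line Python check confirms:

```python
node_peak_mflops = 266   # per-node peak quoted for the CCe
nodes = 8                # installation at Geneva University Hospital
system_peak_gflops = node_peak_mflops * nodes / 1000
print(f"{system_peak_gflops:.1f} GFLOPS")  # prints "2.1 GFLOPS"
```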
Powermouse was another scalable system that consisted of modules and individual components. It was a straightforward extension of the x'plorer system. [39] Each module (dimensions: 9 cm x 21 cm x 45 cm) contained four MPC 604 processors (200/300 MHz) and 64 MB of RAM, achieving a peak performance of 2.4 GFLOPS. A separate communication processor (T425) equipped with 4 MB of RAM [42] controlled the data flow in four directions to other modules in the system. The bandwidth of a single node was 9 MB/s.
For about 35,000 DM, a basic system consisting of 16 CPUs (i.e., four modules) could provide a total computing power of 9.6 GFLOPS. As with all Parsytec products, Powermouse required a Sun SPARCstation as the front-end.
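A short Python sanity check of the Powermouse figures; the price-per-GFLOPS line is a derived, approximate figure, not one quoted by Parsytec:

```python
modules = 4                # basic Powermouse system: four modules
cpus_per_module = 4        # MPC 604 processors per module
module_peak_gflops = 2.4   # per-module peak from the text
price_dm = 35_000          # approximate price of the basic system

total_cpus = modules * cpus_per_module
total_peak_gflops = modules * module_peak_gflops

print(f"{total_cpus} CPUs, {total_peak_gflops:.1f} GFLOPS")  # 16 CPUs, 9.6 GFLOPS
print(f"~{price_dm / total_peak_gflops:,.0f} DM per GFLOPS")
```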
All software, including PARIX with C++ and Fortran 77 compilers and debuggers (alternatively providing MPI or PVM as user interfaces), was included. [43]
The operating system used was PARIX (PARallel UnIX extensions) [44] – PARIXT8 for the T80x transputers and PARIXT9 for the T9000 transputers, respectively. Based on UNIX, PARIX [45] supported remote procedure calls and was compliant with the POSIX standard. PARIX provided UNIX functionality at the front-end (e.g., a Sun SPARCstation, which had to be purchased separately) with library extensions for the needs of the parallel system at the back-end, which was the Parsytec product itself (connected to the front-end for operation). The PARIX software package included components for the program development environment (compilers, tools, etc.) and the runtime environment (libraries). PARIX offered various types of synchronous and asynchronous communication.
In addition, Parsytec provided a parallel programming environment called Embedded PARIX (EPX). [40]
To develop parallel applications using EPX, data streams and function tasks were allocated to a network of nodes. The data handling between processors required only a few system calls. Standard routines for synchronous communication, such as send and receive, were available, as well as asynchronous system calls. The full set of EPX calls formed the EPX application programming interface (API). The destination for any message transfer was defined through a virtual channel that ended at any user-defined process. Virtual channels were managed by EPX and could be defined by the user. The actual message delivery system utilized the router. [41] Additionally, COSY (Concurrent Operating SYstem) [46] and Helios could also be run on the machines. Helios supported Parsytec's special reset mechanism out of the box.
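The send/receive programming style that EPX encouraged can be illustrated with a generic Python analogy built on OS pipes; this mimics the idea of a virtual channel between two processes but uses none of the actual EPX API:

```python
from multiprocessing import Process, Pipe

def worker(channel):
    # Receive a task over the "virtual channel", compute, send the result back.
    data = channel.recv()
    channel.send(sum(data))

if __name__ == "__main__":
    parent_end, child_end = Pipe()   # stands in for a user-defined virtual channel
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])    # synchronous-style send to the worker
    print(parent_end.recv())         # blocking receive -> 10
    p.join()
```

In EPX, the channel endpoints would be established between user-defined processes on the transputer network, with the router handling the actual message delivery.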