Personal supercomputer

A personal supercomputer (PSC) is a marketing term used by computer manufacturers for high-performance computer systems; it was popular from the mid-2000s to the early 2010s. [1] There is no exact definition of a personal supercomputer, and the label has been applied to many systems, such as the Cray CX1 [2] and the Apple Power Mac G4. [3] Generally, though, the label is used for high-end workstations and servers that have multiple processors and are small enough to fit on or beside a desk. Related terms include desktop or deskside supercomputer and supercomputer in a box.

Deskside clusters

This is the closest thing to a formal definition of a personal supercomputer, as most computers marketed under the term, such as the Cray CX1, are deskside clusters. insideHPC.com defines a deskside cluster as follows: "Deskside clusters come in a chassis that you can plug into the wall of your office, and they are designed to sit on the floor next to your desk. The chassis can hold a relatively small number of computers that are on blades, trays, or in enclosures that slide into the chassis and bundle everything together." [4] Each blade or node is equipped with one or more CPUs and RAM; it may also carry one or more GPUs, so that it can be used for CAD or general computation, and it may have built-in bays for hard drives. Computational load can run on each blade independently or be combined across blades using clustering techniques.
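The scale implied by such a chassis can be illustrated with a back-of-the-envelope peak-performance estimate, the kind of figure vendors quoted for these machines. The sketch below (in Python) uses assumed numbers loosely modeled on a late-2000s x86 blade; the core count, clock rate, and FLOPs-per-cycle values are illustrative, not taken from any specific product:

```python
# Hypothetical sketch: theoretical peak performance of a small deskside
# cluster, computed from a per-blade specification. All figures are
# assumptions for illustration, not specs of a real machine.

def blade_peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak of one blade: cores x clock (GHz) x FLOPs per cycle."""
    return cores * clock_ghz * flops_per_cycle

# Assumed blade: 8 cores at 2.5 GHz, 4 double-precision FLOPs per cycle.
per_blade = blade_peak_gflops(cores=8, clock_ghz=2.5, flops_per_cycle=4)

# A chassis holding 8 such blades, with the load combined by clustering:
cluster_peak = 8 * per_blade

print(f"per blade: {per_blade} GFLOPS, chassis: {cluster_peak} GFLOPS")
```

Under these assumptions one blade peaks at 80 GFLOPS and the full chassis at 640 GFLOPS, i.e. the chassis multiplies per-blade throughput only for workloads that can actually be spread across blades.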

Applications

Personal supercomputers can be used in medical applications for processing brain and body scans, resulting in faster diagnoses. [5] Another application is persistent aerial surveillance, where large amounts of video data must be processed and stored. [6] They are also used in artificial intelligence for deep learning and machine learning, and in data analysis, which requires large amounts of computational power.

Computers marketed as personal supercomputers

Deskside clusters

Cray CX1 [2]
TyanPSC T-600 series [7]
SGI Octane III [8]

High-end workstations

Nvidia DGX Station [9]

Other computers

Apple Power Mac G4 [3]

References

1. "Google Trends". Google Trends. Retrieved 2022-03-03.
2. "Cray Unveils Personal Supercomputer". HPCwire. 2008-09-16. Retrieved 2022-03-03.
3. Norr, Henry (1999-09-01). "Apple Unveils 'Personal Supercomputer'". SFGATE. Retrieved 2022-03-03.
4. "Intro to HPC: what's a cluster?". insideHPC. Retrieved 2022-03-03.
5. Wardrop, Murray (2008-12-05). "World's first personal supercomputer unveiled". The Telegraph. Retrieved 2011-07-08.
6. "PAKCK: Performance and Power Analysis of Key Computational Kernels on CPUs and GPUs". Retrieved 2022-03-03.
7. "TYAN Announces Availability of Quad-Core TyanPSC T-600 Series Personal Supercomputer". Tyan. Fremont, CA. 2007-03-19. Archived 2007-03-28. Retrieved 2022-03-02.
8. Flatley, J. (2009-09-22). "SGI announces Octane III personal supercomputer". Engadget. Retrieved 2022-03-02.
9. "NVIDIA DGX Station". www.altair.com. Retrieved 2022-03-03.