SAGA-220

Active: 2 May 2011
Location: Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram, India
Speed: 220 teraFLOPS
Cost: INR 140,000,000
Purpose: Aeronautics study

SAGA-220 (Supercomputer for Aerospace with GPU Architecture, 220 teraFLOPS [1]) is a supercomputer built by the Indian Space Research Organisation (ISRO).

It was unveiled on 2 May 2011 by Dr K. Radhakrishnan, then chairman of ISRO. [2] It was the fastest supercomputer in India at the time, but as of 8 January 2018 it has been surpassed by the Pratyush supercomputer, [1] which has a maximum theoretical speed of 4.0 petaFLOPS.

Located at the Satish Dhawan Supercomputing Facility at Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram, [2] it was built using commercially available hardware, open-source software components and in-house developments. The system uses 400 NVIDIA Tesla C2070 GPUs and 400 Intel quad-core Xeon CPUs supplied by WIPRO. [3] Each NVIDIA Tesla C2070 GPU is capable of delivering 515 gigaFLOPS, compared to the Xeon CPU's more modest contribution of 50 gigaFLOPS. [2] The system cost about INR 140,000,000 to build [2] and consumes only about 150 kW. [1]
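The quoted peak can be roughly reconstructed from the per-component figures above. A back-of-the-envelope sketch (the official 220 teraFLOPS figure is presumably a rounded or derated value):

```python
# Rough theoretical-peak estimate for SAGA-220 from the component figures
# quoted in this article.
n_gpus, gpu_gflops = 400, 515   # NVIDIA Tesla C2070 GPUs
n_cpus, cpu_gflops = 400, 50    # Intel quad-core Xeon CPUs

peak_gflops = n_gpus * gpu_gflops + n_cpus * cpu_gflops
peak_tflops = peak_gflops / 1000
print(peak_tflops)  # 226.0, close to the quoted 220 teraFLOPS
```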

The system is being used by scientists to solve complex aeronautical problems. It has been hinted that it will be used to design future space launch vehicles. [1]

In June 2012, SAGA-220 was ranked 86th on the Top500 list. By June 2015, it was ranked 422nd. [4]


Related Research Articles

Supercomputer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have been supercomputers that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Floating point operations per second is a measure of computer performance in computing, useful in fields of scientific computations that require floating-point calculations.
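A machine's theoretical peak FLOPS is conventionally the product of its socket count, cores per socket, clock rate, and floating-point operations per cycle. A minimal sketch with hypothetical chip figures (the numbers below are illustrative, not those of any machine in this article):

```python
# Illustrative peak-FLOPS calculation (hypothetical chip figures):
#   peak = sockets * cores_per_socket * clock_hz * flops_per_cycle
sockets = 2
cores_per_socket = 4
clock_hz = 2.4e9        # 2.4 GHz
flops_per_cycle = 4     # architecture-dependent (SIMD width, FMA support)

peak_flops = sockets * cores_per_socket * clock_hz * flops_per_cycle
print(peak_flops / 1e9)  # 76.8 GFLOPS
```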

The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is an advanced computing research center that is based on comprehensive advanced computing resources and supports services to researchers in Texas and across the U.S. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable the research activities of faculty, staff, and students of UT Austin. TACC also provides consulting, technical documentation, and training to support researchers who use these resources. TACC staff members conduct research and development in applications and algorithms, computing systems design/architecture, and programming tools and environments.

TOP500

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
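HPL solves a dense n-by-n linear system and credits the run with a fixed operation count, so the reported Rmax is that count divided by the measured wall-clock time. A sketch of the arithmetic (the problem size and run time below are hypothetical):

```python
# HPL credits a dense LU solve of an n x n system with (2/3)n^3 + 2n^2
# floating-point operations; Rmax is that count over the measured time.
def hpl_rmax(n, seconds):
    flops = (2 / 3) * n**3 + 2 * n**2
    return flops / seconds

# Hypothetical run: n = 100,000 unknowns solved in one hour.
print(hpl_rmax(100_000, 3600) / 1e12)  # Rmax in teraFLOPS
```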

EKA is a supercomputer built by Computational Research Laboratories, a company founded by Dr. Narendra Karmarkar, to scale up a supercomputer architecture he designed at the Tata Institute of Fundamental Research with a group of his students and project assistants over a period of six years.

The Green500 is a biannual ranking of supercomputers from the TOP500 list in terms of energy efficiency. The list measures performance per watt, using the TOP500's high-performance LINPACK benchmark results at double-precision floating-point format.
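Applying the performance-per-watt metric to the figures quoted for SAGA-220 in this article gives a rough efficiency estimate (note the Green500 uses measured LINPACK Rmax, whereas the 220 teraFLOPS here is a theoretical peak, so this is an upper-bound sketch):

```python
# Green500-style efficiency estimate for SAGA-220, using this article's
# figures. The Green500 proper divides measured Rmax by measured power.
peak_gflops = 220_000   # 220 teraFLOPS, theoretical peak
power_watts = 150_000   # about 150 kW

gflops_per_watt = peak_gflops / power_watts
print(round(gflops_per_watt, 2))  # ~1.47 GFLOPS per watt
```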

Tianhe-1

Tianhe-I, Tianhe-1, or TH-1 is a supercomputer capable of an Rmax of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world.

Nebulae is a petascale supercomputer located at the National Supercomputing Center in Shenzhen, Guangdong, China. Built from a Dawning TC3600 Blade system with Intel Xeon X5650 processors and Nvidia Tesla C2050 GPUs, it achieves a peak performance of 1.271 petaFLOPS on the LINPACK benchmark suite, with a theoretical peak of 2.9843 petaFLOPS. Nebulae was ranked the second most powerful computer in the world on the June 2010 TOP500 list of the fastest supercomputers, and 10th on the June 2012 list. This computer is used for multiple applications requiring advanced processing capabilities.

Supercomputing in Japan

Japan operates a number of centers for supercomputing which hold world records in speed, with the K computer being the world's fastest from June 2011 to June 2012, and Fugaku holding the lead from June 2020 until June 2022.

Tsubame (supercomputer)

Tsubame is a series of supercomputers that operates at the GSIC Center at the Tokyo Institute of Technology in Japan, designed by Satoshi Matsuoka.

Supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Xeon Phi

Xeon Phi was a series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP.

Titan (supercomputer)

Titan or OLCF-3 was a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan was an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that used graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan was the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, stability testing commenced in October 2012, and it became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.

XK7 is a supercomputing platform, produced by Cray, launched on October 29, 2012. XK7 is the second platform from Cray to use a combination of central processing units ("CPUs") and graphical processing units ("GPUs") for computing; the hybrid architecture requires a different approach to programming to that of CPU-only supercomputers. Laboratories that host XK7 machines host workshops to train researchers in the new programming languages needed for XK7 machines. The platform is used in Titan, the world's second fastest supercomputer in the November 2013 list as ranked by the TOP500 organization. Other customers include the Swiss National Supercomputing Centre, which has a 272-node machine, and Blue Waters, which has a machine with Cray XE6 and XK7 nodes that performs at approximately 1 petaFLOPS (10^15 floating-point operations per second).

The Cray XC30 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Xeon processors, with optional Nvidia Tesla or Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. Each liquid-cooled cabinet can contain up to 48 blades, each with eight CPU sockets, and uses 90 kW of power. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.

Cray XC40

The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.

Nvidia DGX

Nvidia DGX is a line of Nvidia-produced servers and workstations which specialize in using GPGPU to accelerate deep learning applications. The typical design of a DGX system is based upon a rackmount chassis with motherboard that carries high performance x86 server CPUs. The main component of a DGX system is a set of 4 to 8 Nvidia Tesla GPU modules on an independent system board. DGX systems have large heatsinks and powerful fans to adequately cool thousands of watts of thermal output. The GPU modules are typically integrated into the system using a version of the SXM socket or by a PCIe x16 slot.

Piz Daint is a supercomputer in the Swiss National Supercomputing Centre, named after the mountain Piz Daint in the Swiss Alps.

Taiwania (supercomputer)

Taiwania is a supercomputer series in Taiwan owned by the National Applied Research Laboratories.

Christofari (2019) and Christofari Neo (2021) are supercomputers of Sberbank of Russia built on Nvidia hardware. Their main purpose is neural-network training; they are also used for scientific research and commercial calculations.

References

  1. Nicole Hemsoth (2 May 2011). "Top Indian Supercomputer Boots Up at Space Center". HPCwire. Retrieved 9 January 2013.
  2. "Welcome To ISRO :: Press Release :: May 02, 2011". Isro.org. 2 May 2011. Retrieved 9 January 2013.
  3. "Saga-220 is India's fastest super computer". The New Indian Express. 16 May 2012. Retrieved 19 April 2024.
  4. "SAGA - Z24XX/SL390s Cluster, Xeon E5530/E5645 6C 2.40GHz, Infiniband QDR, NVIDIA 2090/2070". Top500.org. Retrieved 20 December 2014.