Established | 1993 |
---|---|
Affiliation | University of Alaska Fairbanks |
Director | Dr. Gregory Newby |
Administrative staff | 30 |
Location | Fairbanks, Alaska |
Website | arsc.edu |
The Arctic Region Supercomputing Center (ARSC) was a research facility organized under the University of Alaska Fairbanks (UAF) from 1993 to 2015. Located on the UAF campus, ARSC offered high-performance computing (HPC) and mass-storage resources to the UAF and State of Alaska research communities.
In general, the research supported with ARSC resources focused on the Earth's Arctic region. Common projects included Arctic weather modeling, Alaskan summer smoke forecasting, Arctic sea ice analysis and tracking, Arctic Ocean systems, volcanic ash plume prediction, and tsunami forecasting and modeling.
ARSC was a Distributed Center (DC), then an Allocated Distributed Center (ADC), and finally one of six DoD Supercomputing Resource Centers (DSRCs) of the Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) from 1993 through 2011.
ARSC hosted a variety of HPC systems, many of which were listed among the 500 most powerful in the world; for more than ten years ARSC kept at least one system on the TOP500 list. [1] Funding for ARSC operations was supplied primarily by the DoD HPCMP, augmented by UAF and by external grants and contracts from sources such as the National Science Foundation. In December 2010, the Fairbanks Daily News-Miner reported probable layoffs for most of the Arctic Region Supercomputing Center's 46 employees with the loss of its Department of Defense contract in 2011. [2] The article reported that 95 percent of ARSC's funding came from the Department of Defense. Once that funding source was lost, ARSC could no longer afford systems powerful enough to place on the TOP500 list.
The following timeline includes various HPC systems acquired by ARSC, with TOP500 standings where applicable:
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over 10¹⁷ FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10¹¹) to tens of teraFLOPS (10¹³). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers.
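To make those orders of magnitude concrete, the toy Python calculation below (using only the figures quoted above; the variable names are ours) works out how many desktop-class machines it would take to match a 100 PFLOPS system.

```python
# Figures taken from the ranges quoted above.
SUPERCOMPUTER = 100e15   # 100 petaFLOPS = 1e17 FLOPS
DESKTOP_LOW = 100e9      # hundreds of gigaFLOPS
DESKTOP_HIGH = 10e12     # tens of teraFLOPS

# One 100 PFLOPS system matches between ten thousand high-end
# desktops and a million ordinary ones.
print(f"{SUPERCOMPUTER / DESKTOP_HIGH:,.0f} to "
      f"{SUPERCOMPUTER / DESKTOP_LOW:,.0f} desktops")
# -> 10,000 to 1,000,000
```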
Floating-point operations per second (FLOPS) is a measure of computer performance, useful in fields of scientific computation that require floating-point calculations.
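As a rough illustration of how the metric is obtained in practice, the sketch below (a toy microbenchmark, not a substitute for a real benchmark such as LINPACK; the matrix size n = 2048 and the function name are arbitrary choices) estimates sustained FLOPS by timing a dense matrix multiplication, which costs about 2n³ floating-point operations.

```python
import time
import numpy as np

def estimate_gflops(n=2048, trials=3):
    """Estimate sustained GFLOPS from an n x n matrix multiply.

    A matrix-matrix product performs roughly 2*n**3 floating-point
    operations: n multiplies and about n adds per output element.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    return 2 * n**3 / best / 1e9

if __name__ == "__main__":
    print(f"~{estimate_gflops():.1f} GFLOPS sustained")
```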
The ETA10 is a vector supercomputer designed, manufactured, and marketed by ETA Systems, a spin-off division of Control Data Corporation (CDC). The ETA10 was an evolution of the CDC Cyber 205, which can trace its origins back to the CDC STAR-100, one of the first vector supercomputers to be developed.
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.
NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. The series is notable for including the first computer to exceed 1 gigaFLOPS, as well as the fastest supercomputer in the world from 1992 to 1993 and again from 2002 to 2004. The current model, as of 2018, is the SX-Aurora TSUBASA.
The Cray T3E was Cray Research's second-generation massively parallel supercomputer architecture, launched in late November 1995. The first T3E was installed at the Pittsburgh Supercomputing Center in 1996. Like the previous Cray T3D, it was a fully distributed memory machine using a 3D torus topology interconnection network. The T3E initially used the DEC Alpha 21164 (EV5) microprocessor and was designed to scale from 8 to 2,176 Processing Elements (PEs). Each PE had between 64 MB and 2 GB of DRAM and a 6-way interconnect router with a payload bandwidth of 480 MB/s in each direction. Unlike many other MPP systems, including the T3D, the T3E was fully self-hosted and ran the UNICOS/mk distributed operating system with a GigaRing I/O subsystem integrated into the torus for network, disk and tape I/O.
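To illustrate why a wraparound torus matters at this scale, here is a minimal sketch (hypothetical coordinates and dimensions; not Cray's actual routing code) of the minimal hop count between two nodes in a 3D torus: along each axis a message may travel either direction around the ring, so the per-axis distance is capped at half the axis length.

```python
def torus_hops(a, b, dims):
    """Minimal hop count between nodes a and b in a 3D torus.

    a, b -- (x, y, z) integer coordinates of two nodes
    dims -- (X, Y, Z) number of nodes along each axis

    Each axis is a ring, so the distance is the shorter of going
    forward or wrapping around backward.
    """
    return sum(min(abs(ai - bi), d - abs(ai - bi))
               for ai, bi, d in zip(a, b, dims))

# In an 8 x 8 x 8 torus, opposite corners are one hop apart on each
# axis thanks to the wraparound links:
print(torus_hops((0, 0, 0), (7, 7, 7), (8, 8, 8)))  # -> 3, vs 21 on a mesh
```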
Scalable POWERparallel (SP) is a series of supercomputers from IBM. SP systems were part of the IBM RISC System/6000 (RS/6000) family, and were also called the RS/6000 SP. The first model, the SP1, was introduced in February 1993, and new models were introduced throughout the 1990s until the RS/6000 was succeeded by eServer pSeries in October 2000. The SP is a distributed memory system, consisting of multiple RS/6000-based nodes interconnected by an IBM-proprietary switch called the High Performance Switch (HPS). The nodes are clustered using software called PSSP, which is mainly written in Perl.
Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, becoming the world's first system to sustain 1.0 petaflops on the TOP500 LINPACK benchmark.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
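A minimal sketch of what an HPL-style measurement computes, assuming NumPy's LAPACK-backed solver as a single-node stand-in for the distributed HPL kernel: time a dense solve of Ax = b and credit the standard operation count of 2/3·n³ + 2·n², as HPL does.

```python
import time
import numpy as np

def linpack_style_gflops(n=4096):
    """Time a dense solve of Ax = b and convert to GFLOPS.

    HPL credits 2/3*n**3 + 2*n**2 floating-point operations for an
    n x n solve (LU factorization plus triangular back-substitution),
    independent of the algorithm actually used.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n)
    start = time.perf_counter()
    x = np.linalg.solve(a, b)
    elapsed = time.perf_counter() - start
    assert np.allclose(a @ x, b)  # sanity-check the solution
    return ((2 / 3) * n**3 + 2 * n**2) / elapsed / 1e9

if __name__ == "__main__":
    print(f"~{linpack_style_gflops():.1f} GFLOPS (HPL operation count)")
```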
The United States Department of Defense High Performance Computing Modernization Program (HPCMP) was initiated in 1992 in response to Congressional direction to modernize the Department of Defense (DoD) laboratories’ high performance computing capabilities. The HPCMP provides supercomputers, a national research network, high-end software tools, a secure environment, and computational science experts that together enable the Defense laboratories and test centers to conduct research, development, test and technology evaluation activities.
Magerit is one of the most powerful supercomputers in Spain. It also reached the second best Spanish position in the TOP500 list of supercomputers. It is installed in CeSViMa, a research center of the Technical University of Madrid.
Jaguar or OLCF-2 was a petascale supercomputer built by Cray at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. The massively parallel Jaguar had a peak performance of just over 1,750 teraFLOPS. It had 224,256 x86-based AMD Opteron processor cores, and operated with a version of Linux called the Cray Linux Environment. Jaguar was a Cray XT5 system, a development from the Cray XT4 supercomputer.
This list compares various amounts of computing power, organized by order of magnitude in FLOPS.
The Swiss National Supercomputing Centre (CSCS) is the national high-performance computing centre of Switzerland. It was founded in Manno, canton of Ticino, in 1991. In March 2012, CSCS moved to its new location in Lugano-Cornaredo.
The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10¹⁶) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.
The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC in 1954, and in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.
Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.
XK7 is a supercomputing platform, produced by Cray, launched on October 29, 2012. XK7 is the second platform from Cray to use a combination of central processing units ("CPUs") and graphical processing units ("GPUs") for computing; the hybrid architecture requires a different approach to programming from that of CPU-only supercomputers. Laboratories that host XK7 machines hold workshops to train researchers in the new programming languages needed for XK7 machines. The platform is used in Titan, the world's second-fastest supercomputer in the November 2013 list as ranked by the TOP500 organization. Other customers include the Swiss National Supercomputing Centre, which has a 272-node machine, and Blue Waters, which has a machine with both Cray XE6 and XK7 nodes that performs at approximately 1 petaFLOPS (10¹⁵ floating-point operations per second).
The Cray XC30 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Xeon processors, with optional Nvidia Tesla or Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. Each liquid-cooled cabinet can contain up to 48 blades, each with eight CPU sockets, and uses 90 kW of power. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.
Aurora is a supercomputer that was sponsored by the United States Department of Energy (DOE) and designed by Intel and Cray for the Argonne National Laboratory. It has been the second-fastest supercomputer in the world since 2023. It is expected that, after performance optimization, it will exceed 2 exaFLOPS, which would make it the fastest computer ever.