NASA Advanced Supercomputing Division

Agency overview

Formed: 1982
Preceding agencies:
  • Numerical Aerodynamic Simulation Division (1982)
  • Numerical Aerospace Simulation Division (1995)
Headquarters: NASA Ames Research Center, Moffett Field, California
Coordinates: 37°25′16″N 122°03′53″W (37.42111°N, 122.06472°W)
Agency executive: Piyush Mehrotra, Division Chief
Parent department: Ames Research Center Exploration Technology Directorate
Parent agency: National Aeronautics and Space Administration (NASA)
Website: www.nas.nasa.gov

Current supercomputing systems

  • Pleiades: SGI/HPE ICE X supercluster
  • Aitken: HPE E-Cell system [1]
  • Electra: SGI/HPE ICE X and HPE E-Cell system [2]
  • Endeavour: SGI UV shared-memory system
  • Merope: SGI Altix supercluster [3]

The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field, California, in the heart of Silicon Valley. For almost forty years it has been NASA's major resource for supercomputing, modeling, and simulation, supporting missions in aerodynamics, space exploration, studies of weather patterns and ocean currents, and Space Shuttle and aircraft design and development.

The facility currently houses the petascale Pleiades, Aitken, and Electra supercomputers, as well as the terascale Endeavour supercomputer. The systems are built on SGI and HPE architectures with Intel processors. The main building also houses disk and archival tape storage systems with a capacity of over an exabyte of data, the hyperwall visualization system, and one of the largest InfiniBand network fabrics in the world. [4] The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability (HECC) Project. [5]

History

Founding

In the mid-1970s, a group of aerospace engineers at Ames Research Center began investigating how to move aerospace research and development away from costly and time-consuming wind tunnel testing and toward simulation-based design and engineering, using computational fluid dynamics (CFD) models run on supercomputers more powerful than those commercially available at the time. The effort was later named the Numerical Aerodynamic Simulator (NAS) Project, and its first computer was installed at the Central Computing Facility at Ames Research Center in 1984.

Ground was broken on March 14, 1985 for a state-of-the-art supercomputing facility, a building where CFD experts, computer scientists, visualization specialists, and network and storage engineers could work under one roof in a collaborative environment. In 1986, NAS transitioned into a full-fledged NASA division, and in 1987, NAS staff and equipment, including a second supercomputer, a Cray-2 named Navier, were relocated to the new facility, which was dedicated on March 9, 1987. [6]

In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to the name it has today.

Industry leading innovations

NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing. [7]

An image of the flowfield around the Space Shuttle Launch Vehicle traveling at Mach 2.46 and at an altitude of 66,000 feet (20,000 m). The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW code.

Software development

NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement (NOSA). [9]

Among the most important software developments from NAS are the Flow Analysis Software Toolkit (FAST) [10] and the Cart3D aerodynamic analysis package. [11]

Supercomputing history

Since its construction in 1987, the NASA Advanced Supercomputing Facility has housed and operated some of the most powerful supercomputers in the world. These have included many testbed systems built to evaluate new architectures, hardware, or networking setups that might be deployed at larger scale. [6] [8] Peak performance is shown in floating-point operations per second (FLOPS).
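Such peak figures are theoretical maxima, obtained by multiplying processor count, clock rate, and floating-point operations per cycle. A minimal sketch of the arithmetic, using hypothetical node parameters rather than the specifications of any NAS system:

```python
# Illustrative sketch of how a theoretical peak-FLOPS figure is derived.
# The node parameters below are hypothetical, not NAS specifications.

def peak_flops(nodes, cpus_per_node, cores_per_cpu, clock_hz, flops_per_cycle):
    """Theoretical peak = total cores x clock rate x FLOPs per core per cycle."""
    return nodes * cpus_per_node * cores_per_cpu * clock_hz * flops_per_cycle

# Example: 2,000 dual-socket nodes, 14 cores per CPU, 2.4 GHz, and
# 16 double-precision FLOPs per core per cycle (e.g., AVX2 with FMA).
print(peak_flops(2_000, 2, 14, 2.4e9, 16) / 1e15, "petaflops")  # ~2.15 petaflops
```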

In the table below, blank name and architecture cells indicate an upgrade or expansion of the system listed above.

| Computer Name | Architecture | Peak Performance | Number of CPUs | Installation Date |
| --- | --- | --- | --- | --- |
|  | Cray X-MP-12 | 210.53 megaflops | 1 | 1984 |
| Navier | Cray 2 | 1.95 gigaflops | 4 | 1985 |
| Chuck | Convex 3820 | 1.9 gigaflops | 8 | 1987 |
| Pierre | Thinking Machines CM2 | 14.34 gigaflops | 16,000 | 1987 |
|  |  | 43 gigaflops | 48,000 | 1991 |
| Stokes | Cray 2 | 1.95 gigaflops | 4 | 1988 |
| Piper | CDC/ETA-10Q | 840 megaflops | 4 | 1988 |
| Reynolds | Cray Y-MP | 2.54 gigaflops | 8 | 1988 |
|  |  | 2.67 gigaflops | 8 | 1988 |
| Lagrange | Intel iPSC/860 | 7.88 gigaflops | 128 | 1990 |
| Gamma | Intel iPSC/860 | 7.68 gigaflops | 128 | 1990 |
| von Karman | Convex 3240 | 200 megaflops | 4 | 1991 |
| Boltzmann | Thinking Machines CM5 | 16.38 gigaflops | 128 | 1993 |
| Sigma | Intel Paragon | 15.60 gigaflops | 208 | 1993 |
| von Neumann | Cray C90 | 15.36 gigaflops | 16 | 1993 |
| Eagle | Cray C90 | 7.68 gigaflops | 8 | 1993 |
| Grace | Intel Paragon | 15.6 gigaflops | 209 | 1993 |
| Babbage | IBM SP-2 | 34.05 gigaflops | 128 | 1994 |
|  |  | 42.56 gigaflops | 160 | 1994 |
| da Vinci | SGI Power Challenge |  | 16 | 1994 |
|  | SGI Power Challenge XL | 11.52 gigaflops | 32 | 1995 |
| Newton | Cray J90 | 7.2 gigaflops | 36 | 1996 |
| Piglet | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997 |
| Turing | SGI Origin 2000/195 MHz | 9.36 gigaflops | 24 | 1997 |
|  |  | 25 gigaflops | 64 | 1997 |
| Fermi | SGI Origin 2000/195 MHz | 3.12 gigaflops | 8 | 1997 |
| Hopper | SGI Origin 2000/250 MHz | 32 gigaflops | 64 | 1997 |
| Evelyn | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997 |
| Steger | SGI Origin 2000/250 MHz | 64 gigaflops | 128 | 1997 |
|  |  | 128 gigaflops | 256 | 1998 |
| Lomax | SGI Origin 2800/300 MHz | 307.2 gigaflops | 512 | 1999 |
|  |  | 409.6 gigaflops | 512 | 2000 |
| Lou | SGI Origin 2000/250 MHz | 4.68 gigaflops | 12 | 1999 |
| Ariel | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000 |
| Sebastian | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000 |
| SN1-512 | SGI Origin 3000/400 MHz | 409.6 gigaflops | 512 | 2001 |
| Bright | Cray SVe1/500 MHz | 64 gigaflops | 32 | 2001 |
| Chapman | SGI Origin 3800/400 MHz | 819.2 gigaflops | 1,024 | 2001 |
|  |  | 1.23 teraflops | 1,024 | 2002 |
| Lomax II | SGI Origin 3800/400 MHz | 409.6 gigaflops | 512 | 2002 |
| Kalpana [14] | SGI Altix 3000 [15] | 2.66 teraflops | 512 | 2003 |
|  | Cray X1 [16] | 204.8 gigaflops |  | 2004 |
| Columbia | SGI Altix 3000 [17] | 63 teraflops | 10,240 | 2004 |
|  | SGI Altix 4700 |  | 10,296 | 2006 |
|  |  | 85.8 teraflops [18] | 13,824 | 2007 |
| Schirra | IBM POWER5+ [19] | 4.8 teraflops | 640 | 2007 |
| RT Jones | SGI ICE 8200, Intel Xeon "Harpertown" processors | 43.5 teraflops | 4,096 | 2007 |
| Pleiades | SGI ICE 8200, Intel Xeon "Harpertown" processors [20] | 487 teraflops | 51,200 | 2008 |
|  |  | 544 teraflops [21] | 56,320 | 2009 |
|  | SGI ICE 8200, Intel Xeon "Harpertown"/"Nehalem" processors [22] | 773 teraflops | 81,920 | 2010 |
|  | SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/"Westmere" processors [23] | 1.09 petaflops | 111,104 | 2011 |
|  | SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/"Sandy Bridge" processors [24] | 1.24 petaflops | 125,980 | 2012 |
|  | SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" processors [25] | 2.87 petaflops | 162,496 | 2013 |
|  |  | 3.59 petaflops | 184,800 | 2014 |
|  | SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" processors [26] | 4.49 petaflops | 198,432 | 2014 |
|  |  | 5.35 petaflops [27] | 210,336 | 2015 |
|  | SGI ICE X, Intel Xeon "Sandy Bridge"/"Ivy Bridge"/"Haswell"/"Broadwell" processors [28] | 7.25 petaflops | 246,048 | 2016 |
| Endeavour | SGI UV 2000, Intel Xeon "Sandy Bridge" processors [29] | 32 teraflops | 1,536 | 2013 |
| Merope | SGI ICE 8200, Intel Xeon "Harpertown" processors [25] | 61 teraflops | 5,120 | 2013 |
|  | SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" processors [26] | 141 teraflops | 1,152 | 2014 |
| Electra | SGI ICE X, Intel Xeon "Broadwell" processors [30] | 1.9 petaflops | 1,152 | 2016 |
|  | SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/"Skylake" processors [31] | 4.79 petaflops | 2,304 | 2017 |
|  |  | 8.32 petaflops [32] | 3,456 | 2018 |
| Aitken | HPE SGI 8600 E-Cell, Intel Xeon "Cascade Lake" processors [33] | 3.69 petaflops | 1,150 | 2019 |


Storage resources

Disk storage

In 1987, NAS partnered with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley on the Redundant Array of Inexpensive Disks (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today. [6]
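The core idea can be sketched with striping and XOR parity, under which the contents of any single failed drive can be rebuilt from the surviving drives. The following is an illustrative sketch only, not the Berkeley/NAS implementation:

```python
# Minimal sketch of RAID-style XOR parity (illustrative, not the
# Berkeley/NAS implementation): any one lost stripe can be rebuilt
# by XOR-ing the surviving stripes with the parity block.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data drives
parity = xor_blocks(data)            # stored on a parity drive

lost = data[1]                       # simulate losing drive 1
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == lost               # reconstruction succeeds
```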

The NAS facility currently houses disk mass storage on an SGI parallel DMF cluster with high-availability software consisting of four 32-processor front-end systems, which are connected to the supercomputers and the archival tape storage system. The system has 192 GB of memory per front-end [34] and 7.6 petabytes (PB) of disk cache. [4] Data stored on disk is regularly migrated to the tape archival storage systems at the facility to free up space for other user projects being run on the supercomputers.

Archive and storage systems

In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two StorageTek 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds. [6]

With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had used for 20 years could no longer meet the needs of a growing user base and the increasing file sizes of each project's datasets. [35] In 2009, NAS brought in Spectra Logic T950 robotic tape systems, which increased the maximum capacity at the facility to 16 petabytes available for users to archive their data from the supercomputers. [36] As of March 2019, the facility had increased the total archival storage capacity of the Spectra Logic tape libraries to 1,048 petabytes (about 1 exabyte) with 35% compression. [34] SGI's Data Migration Facility (DMF) and OpenVault manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
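In outline, a hierarchical storage manager migrates files that have gone cold on disk down to the tape tier and recalls them when they are next accessed. The sketch below illustrates such a policy with hypothetical paths and rules; it does not reflect DMF's or OpenVault's actual interfaces:

```python
# Minimal sketch of a hierarchical-storage migration pass (hypothetical
# policy, not DMF's or OpenVault's actual rules or interfaces): files
# untouched for N days are copied to the tape tier and truncated on disk,
# leaving a zero-length stub that would trigger a recall on next access.
import os
import shutil
import time

DISK_CACHE = "/disk_cache"    # hypothetical disk-cache mount point
TAPE_TIER = "/tape_archive"   # hypothetical tape-tier staging path
AGE_LIMIT = 30 * 86_400       # migrate files idle for 30+ days

def migrate_cold_files():
    now = time.time()
    for name in os.listdir(DISK_CACHE):
        path = os.path.join(DISK_CACHE, name)
        if os.path.isfile(path) and now - os.path.getatime(path) > AGE_LIMIT:
            shutil.copy2(path, os.path.join(TAPE_TIER, name))  # write to tape tier
            open(path, "w").close()  # truncate: keep only a stub on disk
```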

As of March 2019, more than 110 petabytes of unique data were stored in the NAS archival storage system. [34]

Data visualization systems

In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, the beginning of their long partnership with the Silicon Valley–based company, which made a significant impact on post-processing and visualization of CFD results run on the supercomputers at the facility. [6] Visualization became a key process in the analysis of simulation data run on the supercomputers, allowing engineers and scientists to view their results spatially and in ways that allowed for a greater understanding of the CFD forces at work in their designs.

The hyperwall visualization system at the NAS facility allows researchers to view multiple simulations run on the supercomputers, or a single large image or animation.

The hyperwall

In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked LCD panels that allowed scientists to view complex datasets on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet. [37]
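Tiled walls of this kind work by assigning each panel a viewport into a single large virtual canvas, so that one image can span every screen or each panel can show a separate "cell." A sketch of that mapping, with an assumed per-panel resolution rather than the hyperwall's actual software or hardware parameters:

```python
# Illustrative sketch of tiled-display geometry (not the hyperwall's
# actual software): each panel in a 7x7 wall renders its own viewport
# into one large virtual canvas, so a single image spans all screens.
PANEL_W, PANEL_H = 1280, 1024   # hypothetical per-panel resolution
ROWS = COLS = 7                 # the original hyperwall's 7-by-7 array

def viewport(row, col):
    """Pixel rectangle of the virtual canvas shown by panel (row, col)."""
    x, y = col * PANEL_W, row * PANEL_H
    return (x, y, x + PANEL_W, y + PANEL_H)

# The panel at the center of the wall:
print(viewport(3, 3))           # (3840, 3072, 5120, 4096)
```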

The second-generation "hyperwall-2" was developed in 2008 by NAS in partnership with Colfax International. It is made up of 128 LCD screens arranged in an 8-by-16 grid, 23 feet wide by 10 feet tall, and is capable of rendering a quarter of a billion pixels, making it the highest-resolution scientific visualization system in the world at the time. [38] It contains 128 nodes, each with two quad-core AMD Opteron (Barcelona) processors and an NVIDIA GeForce GTX 480 graphics processing unit (GPU), for a dedicated peak processing power of 128 teraflops across the entire system, 100 times more powerful than the original hyperwall. [39] The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, allowing the system to read data directly from the filesystem without copying files onto the hyperwall-2's memory.

In 2014, the hyperwall was upgraded with new hardware: 256 Intel Xeon "Ivy Bridge" processors and 128 NVIDIA GeForce GTX 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops and brought it to nearly 400 gigabytes of graphics memory. [40]

In 2020, the hyperwall was further upgraded with new hardware: 256 Intel Xeon Platinum 8268 (Cascade Lake) processors and 128 NVIDIA Quadro RTX 6000 GPUs with a total of 3.1 terabytes of graphics memory. The upgrade increased the system's peak processing power from 57 teraflops to 512 teraflops. [41]

Concurrent visualization

An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... [and] may show features in a simulation that would otherwise not be visible." [42]
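In outline, concurrent visualization follows a producer-consumer pattern: the solver publishes each timestep as it is computed, and a renderer consumes it while the next step runs. The following is a generic sketch of the pattern, not the NAS pipeline itself:

```python
# Minimal sketch of concurrent visualization (a generic producer/consumer,
# not the NAS pipeline): the solver publishes each timestep as it is
# computed, and a renderer consumes it while the next step runs, so no
# intermediate files are written for later post-processing.
import queue
import threading

timesteps = queue.Queue(maxsize=4)   # bounded so rendering cannot lag unboundedly

def simulate(n_steps):
    for step in range(n_steps):
        field = [step * 0.1] * 8      # stand-in for a computed flow field
        timesteps.put((step, field))  # publish the step for rendering
    timesteps.put(None)               # sentinel: simulation finished

def visualize():
    while (item := timesteps.get()) is not None:
        step, field = item
        print(f"rendering step {step}: max={max(field):.2f}")

threading.Thread(target=simulate, args=(10,)).start()
visualize()
```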

In 2005, the NAS visualization team developed a configurable concurrent pipeline for use with a massively parallel forecast model run on the Columbia supercomputer to help predict the Atlantic hurricane season for the National Hurricane Center. Because each forecast had to be submitted by a strict deadline, it was important that the visualization process not significantly impede the simulation or cause it to fail.


References

  1. "Aitken Supercomputer homepage". NAS.
  2. "Electra Supercomputer homepage". NAS.
  3. "Merope Supercomputer homepage". NAS.
  4. "NASA Advanced Supercomputing Division: Advanced Computing" (PDF). NAS. 2019.
  5. "NAS Homepage - About the NAS Division". NAS.
  6. "NASA Advanced Supercomputing Division 25th Anniversary Brochure" (PDF). NAS. Archived from the original (PDF) on 2013-03-02.
  7. "NAS homepage: Division History". NAS.
  8. "NAS High-Performance Computer History". Gridpoints: 1A–12A. Spring 2002.
  9. "NAS Software and Datasets". NAS.
  10. "NASA Flow Analysis Software Toolkit". NASA.
  11. "NASA Cart3D Homepage". Archived from the original on 2002-06-02.
  12. "NASA.gov". Archived from the original on 2023-01-17. Retrieved 2024-05-21.
  13. "NASA.gov" (PDF).
  14. "NASA to Name Supercomputer After Columbia Astronaut". NAS. May 2005. Archived from the original on 2013-03-17. Retrieved 2014-03-07.
  15. "NASA Ames Installs World's First Alitx 512-Processor Supercomputer". NAS. November 2003. Archived from the original on 2013-03-17. Retrieved 2014-03-07.
  16. "New Cray X1 System Arrives at NAS". NAS. April 2004.
  17. "NASA Unveils Its Newest, Most Powerful Supercomputer". NASA. October 2004. Archived from the original on 2004-10-28. Retrieved 2014-03-07.
  18. "Columbia Supercomputer Legacy homepage". NASA.
  19. "NASA Selects IBM for Next-Generation Supercomputing Applications". NASA. June 2007.
  20. "NASA Supercomputer Ranks Among World's Fastest – November 2008". NASA. November 2008. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
  21. "'Live' Integration of Pleiades Rack Saves 2 Million Hours". NAS. February 2010. Archived from the original on 2013-03-16. Retrieved 2014-03-07.
  22. "NASA Supercomputer Doubles Capacity, Increases Efficiency". NASA. June 2010. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
  23. "NASA's Pleiades Supercomputer Ranks Among World's Fastest". NASA. June 2011. Archived from the original on 2011-10-21. Retrieved 2014-03-07.
  24. "Pleiades Supercomputer Gets a Little More Oomph". NASA. June 2012.
  25. "NASA's Pleiades Supercomputer Upgraded, Harpertown Nodes Repurposed". NAS. August 2013. Archived from the original on 2019-08-25. Retrieved 2014-03-07.
  26. "NASA's Pleiades Supercomputer Upgraded, Gets One Petaflops Boost". NAS. October 2014. Archived from the original on 2019-08-25. Retrieved 2014-12-29.
  27. "Pleiades Supercomputer Performance Leaps to 5.35 Petaflops with Latest Expansion". NAS. January 2015.
  28. "Pleiades Supercomputer Peak Performance Increased, Long-Term Storage Capacity Tripled". NAS. July 2016. Archived from the original on 2019-06-19. Retrieved 2020-03-05.
  29. "Endeavour Supercomputer Resource homepage". NAS.
  30. "NASA Ames Kicks off Pathfinding Modular Supercomputing Facility". NAS. February 2017.
  31. "Recently Expanded, NASA's First Modular Supercomputer Ranks 15th in the U.S. on TOP500 List". NAS. November 2017.
  32. "NASA's Electra Supercomputer Rises to 12th Place in the U.S. on the TOP500 List". NAS. November 2018.
  33. "NASA Advanced Supercomputing Division: Modular Supercomputing" (PDF). NAS. 2019.
  34. "HECC Archival Storage System Resource homepage". NAS.
  35. "NAS Silo, Tape Drive, and Storage Upgrades - SC09" (PDF). NAS. November 2009.
  36. "New NAS Data Archive System Installation Completed". NAS. 2009.
  37. "Mars Flyer Debuts on Hyperwall". NAS. September 2003.
  38. "NASA Develops World's Highest Resolution Visualization System". NAS. June 2008.
  39. "NAS Visualization Systems Overview". NAS.
  40. "NAS hyperwall Visualization System Upgraded with Ivy Bridge Nodes". NAS. October 2014.
  41. "NAS Visualization Systems: hyperwall". NAS. December 2020.
  42. Ellsworth, David; Bryan Green; Chris Henze; Patrick Moran; Timothy Sandstrom (September–October 2006). "Concurrent Visualization in a Production Supercomputing Environment" (PDF). IEEE Transactions on Visualization and Computer Graphics. 12 (5): 997–1004. doi:10.1109/TVCG.2006.128. PMID 17080827. S2CID 14037933.
