Texas Advanced Computing Center


The Texas Advanced Computing Center (TACC) at the University of Texas at Austin is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the United States. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis and storage systems, software, research and development, and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable the research activities of faculty, staff, and students of UT Austin. TACC also provides consulting, technical documentation, and training to support researchers who use these resources. TACC staff members conduct research and development in applications and algorithms, computing systems design and architecture, and programming tools and environments.


Founded in 2001, TACC is one of the centers of computational excellence in the United States. Through the National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) project, TACC’s resources and services are made available to the national academic research community. TACC is located on UT's J. J. Pickle Research Campus.

TACC collaborators include researchers in other UT Austin departments and centers, at Texas universities in the High Performance Computing Across Texas Consortium, and at other U.S. universities and government laboratories.

Visualization Lab

Projects

TACC research and development activities are supported by several federal programs, including:

NSF XSEDE (formerly TeraGrid) Program

Funded by the National Science Foundation (NSF), XSEDE was a virtual system that scientists could use to interactively share computing resources, data, and expertise, and was one of the most powerful and robust collections of integrated advanced digital resources and services in the world. TACC was one of the leading partners in the XSEDE project, whose resources included more than one petaflop of computing capability and more than 30 petabytes of online and archival data storage. As part of the project, TACC provided access to Ranger, Lonestar, Longhorn, Spur, and Ranch through XSEDE quarterly allocations. TACC staff members supported XSEDE researchers nationwide and performed research and development to make XSEDE more effective and impactful. The XSEDE partnership also included the University of Illinois at Urbana-Champaign, Carnegie Mellon University/University of Pittsburgh, University of Texas at Austin, University of Tennessee Knoxville, University of Virginia, Shodor Education Foundation, Southeastern Universities Research Association, University of Chicago, University of California San Diego, Indiana University, Jülich Supercomputing Centre, Purdue University, Cornell University, Ohio State University, University of California Berkeley, Rice University, and the National Center for Atmospheric Research. It was led by the University of Illinois's National Center for Supercomputing Applications. XSEDE concluded formal operations as an NSF-funded project on August 31, 2022. Similar services are now operated through NSF's follow-on program, the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS).

University of Texas Research Cyberinfrastructure (UTRC) Project

The UT System Research Cyberinfrastructure (UTRC) Project is an initiative that allows researchers at all 15 UT System institutions to access advanced computing research infrastructure. As part of the UTRC, UT System researchers have unique access to TACC resources, including TACC's Lonestar, a national XSEDE resource, and Corral, a high-performance storage system for all types of digital data.

iPlant Collaborative

The iPlant Collaborative is a five-year, $50 million NSF project (awarded in 2008) that uses new computational science and cyberinfrastructure solutions to address challenges in the plant sciences. iPlant integrates high-performance petascale storage, federated identity management, on-demand virtualization, and distributed computing across XSEDE sites behind a set of REST APIs. These serve as the basis for community-extensible rich web clients that enable the plant science community to perform sophisticated bioinformatics analyses across a variety of conceptual domains. In September 2013, the NSF announced that it had renewed iPlant's funding for a second five-year term, with an expanded scope covering all non-human life science research.

STAR Partners Program

The Science and Technology Affiliates for Research (STAR) Program offers opportunities for companies to increase their effectiveness by utilizing TACC's computing technologies. Current STAR partners include BP, Chevron, Dell, Green Revolution Cooling, Intel, and Technip.

Digital Rocks Portal

The Digital Rocks Portal is a sustainable, open, and easy-to-use repository that organizes images and related experimental measurements of diverse porous materials. It improves access to porous media analysis results for a wider community of geoscience and engineering researchers not necessarily trained in computer science or data analysis, and enhances productivity, scientific inquiry, and engineering decisions founded on a data-driven basis.

Supercomputer clusters

Stampede

Stampede was one of the most powerful machines in the world for open science research. Funded by National Science Foundation Grant ACI-1134872 and built in partnership with Intel, Dell, and Mellanox, Stampede was stood up in September 2012 and brought online on January 7, 2013. Stampede comprised 6,400 nodes, 102,400 CPU cores, 205 TB of total memory, 14 PB of total storage, and 1.6 PB of local storage. The bulk of the cluster consisted of 160 racks of primary compute nodes, each node with dual 8-core Xeon E5-2680 processors, a Xeon Phi coprocessor, and 32 GB of RAM. [1] The cluster also contained 16 nodes with 32 cores and 1 TB of RAM each, 128 "standard" compute nodes with Nvidia Kepler K20 GPUs, and other nodes for I/O (to a Lustre filesystem), login, and cluster management. [2] Stampede had a peak performance of 9.6 quadrillion floating-point operations per second (9.6 petaflops).
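The aggregate figures above follow directly from the per-node specifications. A quick sketch of the arithmetic, using only numbers stated in this article:

```python
# Cross-check Stampede's published totals from its per-node specs
# (figures taken from the article text, not an authoritative source).
nodes = 6400
cores_per_node = 2 * 8           # dual 8-core Xeon E5-2680 per node
ram_per_node_gb = 32             # GB of RAM per primary compute node

total_cores = nodes * cores_per_node        # 102,400 CPU cores
total_ram_tb = nodes * ram_per_node_gb / 1000  # decimal terabytes

print(total_cores)   # 102400, matching the published core count
print(total_ram_tb)  # 204.8, i.e. the ~205 TB quoted above
```

Note this counts only the Xeon host cores; the Xeon Phi coprocessor cores, which supply much of the 9.6-petaflop peak, are additional.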

A pre-production configuration of Stampede [3] was listed as the 7th fastest supercomputer on the November 2012 Top500 list with a delivered performance of 2660 TFlops. Because the system was still being assembled, the submitted benchmark was run using 1875 nodes with Xeon Phi coprocessors and 3900 nodes without Xeon Phi coprocessors. [4] For the June 2013 Top500 list, the benchmark was re-run using 6006 nodes (all with Xeon Phi coprocessors), delivering 5168 TFlops and moving the system up to 6th place. The benchmark was not re-run for the November 2013 Top500 list and Stampede dropped back to the 7th position.

In its first year of production, Stampede completed 2,196,848 jobs by 3,400 researchers, performing more than 75,000 years of scientific computations.

In 2019, following the decommissioning of Stampede, the United States Federal Reserve took ownership of a significant portion of the machine, operating it as a cluster called BigTex for large-scale financial analysis. [5] Another large portion was repurposed for Stampede2, Stampede's successor, which used socketed Xeon Phi 'Knights Landing' processors rather than the PCIe 'Knights Corner' add-in cards used in Stampede. [6]

Maverick

Maverick, TACC's latest addition to its suite of advanced computing systems, combines capacities for interactive advanced visualization and large-scale data analytics with traditional high performance computing. Recent exponential increases in the size and quantity of digital datasets necessitate systems such as Maverick, capable of fast data movement and advanced statistical analysis. Maverick introduced the NVIDIA K40 GPU for remote visualization and GPU computing to the national community.


Lonestar

Lonestar, a powerful, multi-use cyberinfrastructure HPC and remote visualization resource, is the name of a series of HPC cluster systems at TACC.

The first Lonestar system was built by Dell and integrated by Cray, using Dell PowerEdge 1750 servers and Myrinet interconnects, with a peak performance of 3,672 gigaflops. An upgrade in 2004 increased the number of processors to 1,024 and the peak rate to 6,338 gigaflops. The second iteration (Lonestar 2), deployed in 2006, used Dell PowerEdge 1855 servers and InfiniBand (1,300 processors, 2,000 gigabytes of memory, peak performance 8,320 gigaflops). Later that year, the cluster's third iteration was built from Dell PowerEdge 1955 servers; it comprised 5,200 processors and 10.4 TB of memory. Lonestar 3 entered the Top500 list in November 2006 as the 12th fastest supercomputer, with a 55.5-teraflop peak. [7]

In April 2011, TACC announced another upgrade of the Lonestar cluster. The $12 million Lonestar 4 cluster replaced its predecessor with 1,888 Dell PowerEdge M610 blade servers, each with two six-core Intel Xeon 5600 processors (22,656 total cores). The system storage includes a 1,000 TB parallel (SCRATCH) Lustre file system and 276 TB of local compute-node disk space (146 GB/node). Lonestar also provides access to five large-memory (1 TB) nodes and eight nodes containing two NVIDIA GPUs each, giving users access to high-throughput computing and remote visualization capabilities, respectively. Lonestar 4 [8] entered the Top500 list in June 2011 as the 28th fastest supercomputer, with a 301.8-teraflop peak.
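As with Stampede, the quoted Lonestar 4 core count is the product of the per-node figures given above; a minimal check using only numbers from this article:

```python
# Verify Lonestar 4's total core count from its per-node configuration
# (blade and core counts as stated in the article text).
blades = 1888
cores_per_blade = 2 * 6   # two six-core Intel Xeon 5600 processors

total_cores = blades * cores_per_blade
print(total_cores)  # 22656, matching the published total
```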

The Top500 rankings of various iterations of the Lonestar cluster are listed in TACC's submissions to the Top500. [9]

Ranch

TACC's long-term mass storage solution is an Oracle StorageTek Modular Library System, named Ranch. Ranch utilizes Oracle's Sun Storage Archive Manager Filesystem (SAM-FS) for migrating files to/from a tape archival system with a current offline storage capacity of 40 PB. Ranch's disk cache is built on Oracle's Sun ST6540 and DataDirect Networks 9550 disk arrays containing approximately 110 TB of usable spinning disk storage. These disk arrays are controlled by an Oracle Sun x4600 SAM-FS Metadata server which has 16 CPUs and 32 GB of RAM.

Corral

Deployed in April 2009 by the Texas Advanced Computing Center to support data-centric science at the University of Texas, Corral consists of 6 petabytes of online disk and a number of servers providing high-performance storage for all types of digital data. It supports MySQL and PostgreSQL databases, a high-performance parallel file system, and web-based access and other network protocols for storage and retrieval of data to and from sophisticated instruments, HPC simulations, and visualization laboratories.

Visualization resources

To support the research being performed on its high performance computing systems, TACC provides advanced visualization resources and consulting services, which are accessible both in person and remotely. These resources encompass both hardware and software, and include: Stallion, among the highest-resolution tiled displays in the world; Longhorn, the largest hardware-accelerated, remote, interactive visualization cluster; and the Longhorn Visualization Portal, an internet gateway to the Longhorn cluster and an easy-to-use interface for scientific visualization.

Visualization Laboratory

The TACC Visualization Laboratory, located in POB 2.404a, is open to all UT faculty, students, and staff, as well as UT System users. The Vislab includes: 'Stallion', one of the highest-resolution tiled displays in the world (see below); 'Lasso', a 12.4-megapixel collaborative multi-touch display; 'Bronco', a Sony SRX-S105 projector and flat-screen area that gives users a 20 ft. × 11 ft., 4096 × 2160 resolution display, driven by a high-end Dell workstation and ideal for ultra-high-resolution visualizations and presentations; 'Horseshoes', four high-end Dell Precision systems equipped with Intel multi-core processors and NVIDIA graphics technology for graphics production, visualization, and video editing; 'Saddle', a conference and small meeting room equipped with commercial audio and video capabilities to enable full HD videoconferencing; and 'Mustang' and 'Silver', stereoscopic visualization displays that use Samsung's 240 Hz stereo output modes with 55-inch LED display panels to render depth through the parallax generated by active and passive stereoscopic technologies. Mellanox FDR InfiniBand networking connects these systems at high speed. The Vislab also serves as a research hub for human-computer interaction, tiled display software development, and visualization consulting.

Stallion

Stallion is a 328-megapixel tiled display system with over 150 times the resolution of a standard HD display, making it among the highest pixel-count displays in the world. The cluster provides users with the ability to display high-resolution visualizations on a large 16×5 tiled array of 30-inch Dell monitors. This configuration allows exploration of visualizations at an extremely high level of detail and quality compared to a typical moderate pixel-count projector. The cluster gives users access to over 82 GB of graphics memory and 240 processing cores, enabling the processing of massive datasets and the interactive visualization of substantial geometries. A 36 TB shared file system is available for storing terascale datasets.
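The headline figures are consistent with the tile layout described above. A brief sketch of the arithmetic, assuming each 30-inch Dell panel runs at 2560 × 1600 (the typical native resolution of Dell's 30-inch monitors; the exact model is not stated in the article):

```python
# Estimate Stallion's total pixel count from its 16x5 tile layout.
# Assumption: each 30-inch panel is 2560x1600 (not stated in the article).
tiles = 16 * 5                       # 80 monitors in a 16x5 grid
pixels_per_tile = 2560 * 1600        # assumed per-panel resolution

total_pixels = tiles * pixels_per_tile
hd_pixels = 1920 * 1080              # a standard 1080p HD display

print(total_pixels / 1e6)   # 327.68, i.e. the ~328 megapixels quoted
print(total_pixels / hd_pixels)  # ~158, consistent with "over 150 times"
```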

References

  1. Kennedy, Patrick (November 2012). "Xeon Phi: Intel's Larrabee-Derived Card In TACC's Supercomputer". Tom's Hardware. Retrieved 2021-01-07.
  2. "Frontera System - Texas Advanced Computing Center". www.tacc.utexas.edu. Retrieved 2021-01-07.
  3. "Stampede - PowerEdge C8220, Xeon E5-2680 8C 2.700GHz, Infiniband FDR, Intel Xeon Phi SE10P - TOP500". top500.org. Retrieved 2021-01-07.
  4. Detailed system configuration information is provided by the Top500 web site, but only in the Excel file downloads, e.g., http://s.top500.org/static/lists/2012/11/TOP500_201211.xls.
  5. Iriarte, Mariana (2020-06-18). "Stampede1 Reborn as BigTex, a Supercomputer for the Federal Reserve". HPCwire. Retrieved 2022-07-09.
  6. "Stampede2 - Texas Advanced Computing Center". www.tacc.utexas.edu. Retrieved 2022-07-09.
  7. "Lonestar - PowerEdge 1955, 2.66 GHz, Infiniband - TOP500". top500.org. Retrieved 2021-01-07.
  8. "Lonestar 4 - Dell PowerEdge M610 Cluster, Xeon 5680 3.3Ghz, Infiniband QDR - TOP500". top500.org. Retrieved 2021-01-07.
  9. "Texas Advanced Computing Center/Univ. of Texas - TOP500". top500.org. Retrieved 2021-01-07.

30°23′25″N97°43′32″W / 30.390205°N 97.725652°W / 30.390205; -97.725652