The Holland Computing Center, often abbreviated HCC, is the high-performance computing core for the University of Nebraska System. HCC has locations at both the June and Paul Schorr III Center for Computer Science & Engineering at the University of Nebraska-Lincoln and the Peter Kiewit Institute (PKI) at the University of Nebraska Omaha. [1] The center is named after Omaha businessman Richard Holland, who donated considerably to the university for the project. [2]
Both locations provide various research computing services and hardware. The retrofitted facilities at the PKI location include the Crane supercomputer, [3] which “is used by scientists and engineers to study topics such as nanoscale chemistry, subatomic physics, meteorology, crashworthiness, artificial intelligence and bioinformatics”, [4] and Anvil, HCC's cloud computing resource based on the OpenStack architecture. Other resources include Rhino for shared-memory processing and Red for LHC grid computing.
The Crane supercomputer is HCC's most powerful system and serves as the primary computational resource for many researchers across a variety of disciplines within the University of Nebraska system. When it entered service in 2013, Crane ranked 474th on the TOP500 list. [5] As of May 2019, Crane comprised 548 nodes offering a total of 12,236 cores, 68,000 GB of memory, and 57 Nvidia GPUs. Crane has 1.5 PB of available Lustre storage (1 PB = 1 million gigabytes).
In 2017, Crane received a major upgrade, adding nodes connected by the Intel Omni-Path architecture.
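The aggregate figures above reduce to per-node averages with a little arithmetic; the short Python sketch below works through it, including the petabyte-to-gigabyte conversion noted in parentheses. The averages are illustrative only, since Crane's nodes were not all identical.

```python
# Per-node averages and unit conversion for Crane's May 2019 figures
# (aggregate numbers taken from the text above; averages are illustrative,
# since the nodes were not all identical).
nodes = 548
cores = 12_236
memory_gb = 68_000
lustre_pb = 1.5

gb_per_pb = 1_000_000  # 1 PB = 1 million GB, as noted above

print(f"average cores per node:  {cores / nodes:.1f}")             # ~22.3
print(f"average memory per node: {memory_gb / nodes:.1f} GB")      # ~124.1 GB
print(f"Lustre capacity:         {lustre_pb * gb_per_pb:,.0f} GB")  # 1,500,000 GB
```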
Rhino is the latest addition to HCC's resources, taking the place of the former Tusker supercomputer and using nodes from both Tusker and Sandhills. At its creation in June 2019, Rhino comprised 112 nodes offering a total of 7,168 cores and 25,856 GB of memory. The cluster has 360 TB of Lustre storage available.
Red is the resource for the University of Nebraska-Lincoln's US CMS Tier-2 site. When it was created in August 2005, the cluster contained 111 nodes with 444 AMD Opteron 275 or AMD Opteron 2216 processors and 100 TB of storage. Over time, Red has grown to 344 nodes with 7,280 cores, mixed between Intel Xeon and AMD Opteron processors, and 7 PB of storage using the Hadoop Distributed File System (HDFS).
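As an illustration of what HDFS access looks like in practice, the sketch below lists and reads files with the pyarrow library; the namenode address and paths are hypothetical placeholders rather than Red's actual endpoints, and a local Hadoop client installation is assumed.

```python
# Minimal HDFS access sketch using pyarrow.
# The host, port, and paths below are hypothetical placeholders;
# a local Hadoop client (libhdfs) must be installed for this to run.
from pyarrow import fs

# Connect to the cluster's HDFS namenode (placeholder address).
hdfs = fs.HadoopFileSystem(host="namenode.example.edu", port=8020)

# List the files in a (hypothetical) dataset directory.
for info in hdfs.get_file_info(fs.FileSelector("/store/user/example")):
    print(info.path, info.size)

# Stream the first kilobyte of one file.
with hdfs.open_input_stream("/store/user/example/events.dat") as stream:
    header = stream.read(1024)
```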
Red's primary focus is supporting the CMS experiment at CERN in Switzerland, though its resources have also contributed to other efforts, including the analysis behind LIGO's gravitational-wave discovery.
Attic is HCC's near-line data archival system, available to researchers either in conjunction with the computing resources offered or independently. Attic currently has 1 PB of available data storage, backed up daily at both the Omaha and Lincoln locations.
Anvil is HCC's cloud computing resource, based on the OpenStack software. Anvil allows researchers to create virtual machines for research or for testing concepts that are not well suited to a cluster environment or that require root access. Anvil currently has 1,500 cores, 19,400 GB of memory, and 500 TB of storage available for use by researchers.
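To make the virtual-machine model concrete, the sketch below launches an instance through openstacksdk, the standard Python client for OpenStack clouds; the cloud entry, image, and flavor names are hypothetical placeholders, not Anvil's actual catalog.

```python
# Launching a VM on an OpenStack cloud with openstacksdk.
# The cloud entry "anvil" and the image/flavor names are hypothetical
# placeholders; credentials are assumed to be configured in clouds.yaml.
import openstack

conn = openstack.connect(cloud="anvil")

# Pick a base image and an instance size (flavor) by name.
image = conn.compute.find_image("ubuntu-22.04")   # placeholder image name
flavor = conn.compute.find_flavor("m1.medium")    # placeholder flavor name

server = conn.compute.create_server(
    name="research-vm",
    image_id=image.id,
    flavor_id=flavor.id,
)
server = conn.compute.wait_for_server(server)  # block until ACTIVE
print(server.name, server.status)
```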
RCF, the Research Computing Facility, was used from March 1999 to January 2004. It was an SGI Origin 2000 machine with 8 CPUs, 108 GB of storage, 24 GB of memory in total, and a 155 Mbit/s connection to Internet2.
Homestead was the successor to RCF, running from January 2004 to September 2008. Its name refers to the Homestead Act of 1862, under which much of Nebraska was settled. The cluster consisted of 16 nodes, each with 2 R10k (MIPS R10000) CPUs, 256 MB of memory, and 6 GB of storage.
Bugeater was the first cluster at the Holland Computing Center, running from October 2000 to 2005. Its namesake is the original University of Nebraska mascot, the Bugeaters. The cluster was a prototype Beowulf cluster consisting of 8 nodes, each with 2 Pentium III CPUs and 20 GB of storage.
Sandhills was originally created in February 2002, and the original hardware was retired in March 2007. It consisted of 24 nodes, each with 2 Athlon MP CPUs, 1 GB of memory, and 20 GB of storage.
In 2013, Sandhills received a large upgrade to a mix of 8-, 12-, and 16-core AMD Opteron processors. The cluster had 108 nodes with 5,472 cores, 18,000 GB of memory, and 175 TB of storage in total. This revision was retired in November 2018 and is now part of the Rhino cluster.
Prairiefire was the first notable cluster at the Holland Computing Center, ranking in the TOP500 for three consecutive years [6] (2002, 2003, and 2004), placing 107th, 188th, and 292nd respectively. Prairiefire takes its name from the Nebraska prairies. The original hardware ran from August 2002 to 2006. At the time of its 2002 TOP500 placement, it had 128 nodes, each with 2 AMD Athlon MP CPUs and 2 GB of memory. Prairiefire was retired in 2012, when it was merged into the newer Sandhills cluster.
Named after the Merritt Reservoir, Merritt ran from August 2007 to June 2012. Merritt was an SGI Altix 3700 with 64 Itanium 2 processors, 512 GB of memory, and 8 TB of storage.
Firefly was another notable cluster, placing 43rd on the TOP500 [7] at its creation in 2007. Before retiring in July 2013, Firefly consisted of 1,151 nodes, each with 2 dual-core AMD Opteron processors and 8 GB of memory, with a total of 150 TB of storage. During its service, 140 nodes were upgraded to dual quad-core engineering samples from AMD.
Tusker was the Holland Computing Center's high-memory cluster, designed to run jobs requiring a large quantity of memory, with nodes ranging from 256 GB to 1 TB of memory each. In total, Tusker had 5,200 cores, 22 TB of memory, and 500 TB of Lustre storage space. Tusker was retired in April 2019 and is now part of the Rhino cluster.