The National Energy Research Scientific Computing Center (NERSC) is a high-performance computing (supercomputer) research facility founded in 1974. It is a national user facility operated by Lawrence Berkeley National Laboratory for the United States Department of Energy Office of Science.
As the mission computing center for the Office of Science, NERSC houses high-performance computing and data systems used by about 9,000 scientists at national laboratories and universities around the country. Research at NERSC spans fundamental and applied work on energy efficiency, storage, and generation; Earth systems science; and the understanding of the fundamental forces of nature and the Universe. The largest research areas are High Energy Physics, Materials Science, Chemical Sciences, Climate and Environmental Sciences, Nuclear Physics, and Fusion Energy research.
NERSC was founded in 1974 as the Controlled Thermonuclear Research Computer Center, or CTRCC, at Lawrence Livermore National Laboratory (LLNL). The center was created to provide computing resources to the fusion energy research community and began with a Control Data Corporation 6600 computer (SN-1). The first machine procured directly by the center was a CDC 7600, installed in 1975 with a peak performance of 36 megaflop/s (36 million floating point operations per second). In 1976, the center was renamed the National Magnetic Fusion Energy Computer Center.
Subsequent supercomputers include a Cray-1 (SN-6), which was installed in May 1978 and called the "c" machine. In 1985, the world's first Cray-2 (SN-1) was installed as the "b" machine. The bubbles visible in the fluid of the Cray-2's direct liquid cooling system earned it the nickname "Bubbles."
In 1983, the center began providing a small portion of its resources to researchers outside the fusion community. As the center increasingly supported science across many research areas, it changed its name to the National Energy Research Supercomputer Center in 1990.
In 1995, the Department of Energy (DOE) moved NERSC from Lawrence Livermore National Laboratory to Lawrence Berkeley National Laboratory. To provide continuous support for the research community, a cluster of Cray J90 systems was installed in Berkeley before the main systems at Livermore were shut down for the move in 1996. As part of the move, the center was renamed the National Energy Research Scientific Computing Center but kept the NERSC acronym. In 2000, NERSC moved to a new site in Oakland to accommodate the growing footprint of air-cooled supercomputers.
In November 2015, NERSC moved back to the main Berkeley Lab site and is housed in Shyh Wang Hall, an energy-efficient supercomputer facility.[1][2] The building was financed by the University of California, which manages Berkeley Lab for the U.S. Department of Energy (DOE); the utility infrastructure and computer systems are provided by the DOE. As with the move from LLNL, a new system was first installed in Berkeley before the machines in Oakland were taken down and moved.
The center names its major systems after scientists.
The newest supercomputer, Perlmutter, is named after Saul Perlmutter, an astrophysicist at Berkeley Lab who shared the 2011 Nobel Prize in Physics for his contributions to research showing that the expansion of the universe is accelerating. It is a Cray system based on the Shasta architecture, with Zen 3 based AMD Epyc ("Milan") CPUs and NVIDIA Ampere GPUs.[3] Perlmutter debuted in 2021, ranking 5th on the TOP500 list of the world's fastest supercomputers.
Another NERSC supercomputer is Cori, named after Gerty Cori, a biochemist who was the first American woman to receive a Nobel Prize in science. Cori is a Cray XC40 system with 622,336 Intel processor cores and a theoretical peak performance of 30 petaflop/s (30 quadrillion floating point operations per second). Cori was delivered in two phases. The first phase, also known as the Data Partition, was installed in late 2015 and comprises 12 cabinets and more than 1,600 Intel Xeon "Haswell" compute nodes. The second phase[4] of Cori, installed in summer 2016,[5] added 52 cabinets and more than 9,300 nodes with second-generation Intel Xeon Phi processors (code-named Knights Landing, or KNL for short), making Cori the largest supercomputing system for open science based on KNL processors.
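As an illustration of how a theoretical peak figure such as Cori's 30 petaflop/s is typically derived, the short Python sketch below multiplies assumed per-node processor characteristics (cores × clock rate × floating point operations per cycle) by the node counts quoted above. The clock rates and operations-per-cycle values are assumptions drawn from published Haswell and Knights Landing specifications, not NERSC figures, so the result is only approximate.

    # Rough, illustrative estimate of a system's theoretical peak in petaflop/s.
    # Per-node specifications below are assumptions, not NERSC-published numbers.
    def node_peak_flops(cores, clock_hz, flops_per_cycle):
        # Peak double-precision FLOP/s for one node: cores x clock x FLOPs per cycle.
        return cores * clock_hz * flops_per_cycle

    knl_node = node_peak_flops(68, 1.4e9, 32)      # assumed Xeon Phi "Knights Landing" node, ~3.0 TFLOP/s
    haswell_node = node_peak_flops(32, 2.3e9, 16)  # assumed dual-socket Xeon "Haswell" node, ~1.2 TFLOP/s

    # Node counts from the text: "more than 9,300" KNL and "more than 1,600" Haswell nodes.
    system_peak = 9_300 * knl_node + 1_600 * haswell_node
    print(f"Estimated peak: {system_peak / 1e15:.1f} petaflop/s")  # prints roughly 30 petaflop/s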
NERSC also houses a 200+ petabyte [6] High Performance Storage System (HPSS) for archival mass storage, in use since 1998.
NERSC facilities are accessible through the Energy Sciences Network, or ESnet, which is also managed by Lawrence Berkeley National Laboratory for the Department of Energy.
NERSC staff lead projects in computational science while also helping prepare the broader research community for the exascale era.
NESAP: The NERSC Exascale Science Applications Program partners with code teams and with library and tool developers to prepare application codes for Cori's manycore architecture. The partnership allows 20 projects to collaborate with NERSC, Cray, and Intel, providing access to early hardware, training, and preparation sessions with Intel and Cray staff. Eight of those 20 projects also have the opportunity to host a postdoctoral researcher investigating computational science issues associated with energy-efficient manycore systems.
Shifter: Shifter is an open-source software tool based on Docker containers that enables NERSC users to analyze datasets from experimental facilities. Such containers allow an application to be packaged with its entire software stack, including some portions of the base OS files, and to define user environment variables and the application's "entry point".
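As a hedged illustration of the workflow Shifter supports, the sketch below uses Python's subprocess module to pull a hypothetical Docker image into Shifter and run an analysis command inside it. The image name and analysis script are invented for this example, and the shifterimg/shifter command lines follow NERSC's public Shifter documentation; exact options may differ by site and software version.

    # Hypothetical sketch: pulling a Docker image into Shifter and running a
    # command inside it from a login node. The image name and analysis script
    # are invented; command syntax follows NERSC's Shifter documentation and
    # may vary by site and version.
    import subprocess

    image = "docker:myproject/analysis:latest"   # hypothetical image

    # Convert and pull the Docker image into Shifter's image store.
    subprocess.run(["shifterimg", "pull", image], check=True)

    # Run inside the container: the application's full software stack and
    # environment travel with the image, as described above.
    subprocess.run(["shifter", f"--image={image}", "python", "analyze.py"], check=True)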
HPC4Mfg (High Performance Computing for Manufacturing): NERSC is one of three DOE supercomputing centers working to create an ecosystem in which experts at national laboratories work directly with members of the manufacturing industry, teaching them how to adopt or advance their use of high performance computing (HPC) to address manufacturing challenges. The goals are to increase energy efficiency, reduce environmental impacts, and advance clean energy technologies. The project is led by Lawrence Livermore National Laboratory.
In 2021, NERSC was acknowledged in more than 2,000 refereed scientific journal publications. Six Nobel Prize-winning individuals or teams have used NERSC in their research.
In 2022, NERSC supported nearly 9,000 users from universities, national laboratories, and industry, with users in 50 US states, the District of Columbia, Puerto Rico, and 45 countries. These included researchers from 514 colleges and universities, 26 Department of Energy national laboratories, 52 industry organizations, 31 small businesses, 115 other government labs, and 19 non-profit organizations.
Lawrence Livermore National Laboratory (LLNL) is a federally funded research and development center in Livermore, California, United States. Originally established in 1952, the laboratory now is sponsored by the United States Department of Energy and administered privately by Lawrence Livermore National Security, LLC.
Lawrence Berkeley National Laboratory is a federally funded research and development center in the hills of Berkeley, California, United States. Established in 1931 by the University of California (UC), the laboratory is sponsored by the United States Department of Energy and administered by the UC system. Ernest Lawrence, who won the Nobel Prize in Physics for the invention of the cyclotron, founded the lab and served as its director until his death in 1958. Located in the Berkeley Hills, the lab overlooks the campus of the University of California, Berkeley.
The United States Department of Energy National Laboratories and Technology Centers is a system of laboratories overseen by the United States Department of Energy (DOE) for scientific and technological research. The primary mission of the DOE national laboratories is to conduct research and development (R&D) addressing national priorities: energy and climate, the environment, national security, and health. Sixteen of the seventeen DOE national laboratories are federally funded research and development centers administered, managed, operated and staffed by private-sector organizations under management and operating (M&O) contracts with the DOE. The National Laboratory system was established in the wake of World War II, during which the United States had quickly set up and pursued advanced scientific research through the sprawling Manhattan Project.
The Cray Time Sharing System, also known in the Cray user community as CTSS, was developed in 1978 as an operating system for the Cray-1 and Cray X-MP lines of supercomputers. CTSS was developed by the Los Alamos Scientific Laboratory in conjunction with the Lawrence Livermore Laboratory. CTSS was popular with Cray sites in the United States Department of Energy (DOE), but was also used by several other Cray sites, such as the San Diego Supercomputer Center.
The Oak Ridge Leadership Computing Facility (OLCF), formerly the National Leadership Computing Facility, is a designated user facility operated by Oak Ridge National Laboratory and the Department of Energy. It contains several supercomputers, the largest of which is an HPE OLCF-5 named Frontier, which was ranked 1st on the TOP500 list of world's fastest supercomputers as of June 2023. It is located in Oak Ridge, Tennessee.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
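For a sense of what an HPL-style measurement reports, the toy Python/NumPy sketch below times the solution of a single dense linear system and converts the standard LINPACK operation count (about (2/3)·n³ + 2·n² floating point operations) into a FLOP/s rate. Real HPL runs a distributed-memory LU factorization across an entire machine; this is only a single-node illustration of the idea.

    # Toy, single-node illustration of a LINPACK-style measurement.
    # Real HPL distributes the LU factorization across many nodes; this sketch
    # only shows "operations performed / time taken" for one dense solve.
    import time
    import numpy as np

    n = 4000
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard LINPACK operation count
    print(f"Achieved about {flops / elapsed / 1e9:.1f} GFLOP/s on this problem")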
High Performance Storage System (HPSS) is a flexible, scalable, policy-based, software-defined hierarchical storage management (HSM) product developed by the HPSS Collaboration. It provides scalable HSM, archive, and file system services using cluster, LAN and storage area network (SAN) technologies to aggregate the capacity and performance of many computers, disks, disk systems, tape drives, and tape libraries.
The Office of Science is a component of the United States Department of Energy (DOE). The Office of Science is the lead federal agency supporting fundamental scientific research for energy and the Nation’s largest supporter of basic research in the physical sciences. The Office of Science portfolio has two principal thrusts: direct support of scientific research and direct support of the development, construction, and operation of unique, open-access scientific user facilities that are made available for use by external researchers.
The National Center for Computational Sciences (NCCS) is a United States Department of Energy (DOE) Leadership Computing Facility that houses the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility charged with helping researchers solve challenging scientific problems of global interest with a combination of leading high-performance computing (HPC) resources and international expertise in scientific computing.
Shaheen is the name of a series of supercomputers owned and operated by King Abdullah University of Science and Technology (KAUST), Saudi Arabia. Shaheen is named after the Peregrine Falcon. The most recent model, Shaheen III, is the largest and most powerful supercomputer in the Middle East.
Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.
Xeon Phi is a discontinued series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP.
Appro was a developer of supercomputers for high performance computing (HPC) markets, focused on medium- to large-scale deployments. Appro was based in Milpitas, California, with a computing center in Houston, Texas, and a manufacturing and support subsidiary in South Korea and Japan.
The Cray XC30 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Xeon processors, with optional Nvidia Tesla or Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. Each liquid-cooled cabinet can contain up to 48 blades, each with eight CPU sockets, and uses 90 kW of power. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.
Trinity is a United States supercomputer built by the National Nuclear Security Administration (NNSA) for the Advanced Simulation and Computing Program (ASC). The aim of the ASC program is to simulate, test, and maintain the United States nuclear stockpile.
The Cray XC40 is a massively parallel multiprocessor supercomputer manufactured by Cray. It consists of Intel Haswell Xeon processors, with optional Nvidia Tesla or Intel Xeon Phi accelerators, connected together by Cray's proprietary "Aries" interconnect, stored in air-cooled or liquid-cooled cabinets. The XC series supercomputers are available with the Cray DataWarp applications I/O accelerator technology.
Perlmutter is a supercomputer delivered to the National Energy Research Scientific Computing Center of the United States Department of Energy as the successor to Cori. It was built by Cray and is based on their Shasta architecture, which utilizes Zen 3 based AMD Epyc CPUs ("Milan") and Nvidia Ampere GPUs. Its intended use cases are nuclear fusion simulations, climate projections, and materials and biological research. Phase 1, dedicated on May 27, 2021, reached 70.9 PFLOPS of processing power.
Aurora is an exascale supercomputer that was sponsored by the United States Department of Energy (DOE) and designed by Intel and Cray for the Argonne National Laboratory. It was the second-fastest supercomputer in the world from November 2023 to June 2024.
Horst D. Simon is a computer scientist known for his contributions to high-performance computing (HPC) and computational science. He is the director of ADIA Lab in Abu Dhabi, UAE, and an editor of the TOP500 list.