Storage Resource Manager


Storage Resource Management (SRM) technology was initiated by the Scientific Data Management Group at Lawrence Berkeley National Laboratory (LBNL). The group, part of the Computational Research Division at LBNL, develops technologies for the efficient access, storage, analysis, and management of massive scientific datasets. SRM itself is a suite of software solutions for managing large datasets across a variety of storage systems.


The SRM Middleware Project, specifically, provides tools for dynamic storage management that help prevent data loss and ensure the efficient handling of large volumes of data. The project is part of an international collaboration that includes CERN, DESY, FNAL, ICTO, INFN, LBNL, RAL, and TJNAF.[1]

The SRM Working Group is an international collaboration that has developed the specifications for SRM, defining its functionality and interface design.

The SRM specifications have evolved over time; the current version, SRM v2.2, received minor revisions in 2009.[2] These specifications provide a standardized approach to storage resource management across different platforms.
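As a rough illustration of the interface style the specification defines, the sketch below models SRM v2.2's asynchronous request pattern in Python: a client asks the storage system to stage a file identified by a site URL (SURL) and polls until a transfer URL (TURL) is ready. The operation names and status codes (srmPrepareToGet, srmStatusOfGetRequest, SRM_REQUEST_QUEUED, and so on) come from the v2.2 specification, but the SrmClient wrapper and its exact method signatures are hypothetical stand-ins for a real SOAP binding.

    import time

    class SrmClient:
        """Hypothetical wrapper around a SOAP client bound to an SRM v2.2 endpoint."""

        def __init__(self, endpoint):
            self.endpoint = endpoint

        def srmPrepareToGet(self, surls):
            # A real binding would issue a SOAP request and return a request token.
            raise NotImplementedError("bind this to a real SRM v2.2 service")

        def srmStatusOfGetRequest(self, token):
            # A real binding would return (status_code, {surl: turl}) for the request.
            raise NotImplementedError

    def fetch_turl(client, surl, poll_interval=5.0, timeout=300.0):
        """Ask the SRM to stage one SURL and poll until a TURL is available."""
        token = client.srmPrepareToGet([surl])            # asynchronous: returns immediately
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            status, turls = client.srmStatusOfGetRequest(token)
            if status == "SRM_SUCCESS":                   # file staged and pinned on disk
                return turls[surl]
            if status not in ("SRM_REQUEST_QUEUED", "SRM_REQUEST_INPROGRESS"):
                raise RuntimeError("SRM request failed: " + status)
            time.sleep(poll_interval)                     # still staging, e.g. from tape
        raise TimeoutError("SRM request did not complete in time")

The two-step prepare/poll shape is what lets an SRM front slow tertiary storage such as tape: the request returns immediately, and the client retrieves the file only once it has been staged to disk.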


Related Research Articles

<span class="mw-page-title-main">Lawrence Berkeley National Laboratory</span> National laboratory located near Berkeley, California, U.S.

Lawrence Berkeley National Laboratory (LBNL) is a federally funded research and development center in the hills of Berkeley, California, United States. Established in 1931 by the University of California (UC), the laboratory is sponsored by the United States Department of Energy and administered by the UC system. Ernest Lawrence, who won the Nobel Prize for inventing the cyclotron, founded the laboratory and served as its director until his death in 1958. Located in the Berkeley Hills, the lab overlooks the campus of the University of California, Berkeley.

An electronic lab notebook is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation.

A federal enterprise architecture framework (FEAF) is the reference enterprise architecture of the U.S. federal government. It provides a common approach for the integration of strategic, business and technology management as part of organization design and performance improvement.

<span class="mw-page-title-main">Energy Sciences Network</span>

The Energy Sciences Network (ESnet) is a high-speed computer network serving United States Department of Energy (DOE) scientists and their collaborators worldwide. It is managed by staff at the Lawrence Berkeley National Laboratory.

<span class="mw-page-title-main">National Energy Research Scientific Computing Center</span> Supercomputer facility operated by the US Department of Energy in Berkeley, California

The National Energy Research Scientific Computing Center (NERSC) is a high-performance computing (supercomputer) national user facility operated by Lawrence Berkeley National Laboratory for the United States Department of Energy Office of Science. As the mission computing center for the Office of Science, NERSC houses high-performance computing and data systems used by 9,000 scientists at national laboratories and universities around the country. Research at NERSC focuses on fundamental and applied work in energy efficiency, storage, and generation; Earth systems science; and the fundamental forces of nature and the universe. The largest research areas are high energy physics, materials science, chemical sciences, climate and environmental sciences, nuclear physics, and fusion energy research. NERSC's newest and largest supercomputer is Perlmutter, which debuted in 2021 ranked 5th on the TOP500 list of the world's fastest supercomputers.

The European Committee for Standardization (CEN) Standard Architecture for Healthcare Information Systems, Health Informatics Service Architecture or HISA is a standard that provides guidance on the development of modular open information technology (IT) systems in the healthcare sector. Broadly, architecture standards outline frameworks which can be used in the development of consistent, coherent applications, databases and workstations. This is done through the definition of hardware and software construction requirements and outlining of protocols for communications. The HISA standard provides a formal standard for a service-oriented architecture (SOA), specific for the requirements of health services, based on the principles of Open Distributed Processing. The HISA standard evolved from previous work on healthcare information systems architecture commenced by Reseau d’Information et de Communication Hospitalier Europeen (RICHE) in 1989, and subsequently built upon by a number of organizations across Europe.

High Performance Storage System (HPSS) is a flexible, scalable, policy-based, software-defined Hierarchical Storage Management product developed by the HPSS Collaboration. It provides scalable hierarchical storage management (HSM), archive, and file system services using cluster, LAN and SAN technologies to aggregate the capacity and performance of many computers, disks, disk systems, tape drives, and tape libraries.
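To make the hierarchical storage management idea concrete, here is a minimal sketch of a policy that migrates files between a disk tier and a tape tier. The tier names, thresholds, and FileRecord structure are illustrative assumptions for the example; HPSS's actual policy engine and interfaces are far richer.

    from dataclasses import dataclass

    @dataclass
    class FileRecord:
        name: str
        size_bytes: int
        days_since_access: int
        tier: str = "disk"

    def apply_migration_policy(files, max_idle_days=30, min_size_bytes=1 << 30):
        """Migrate large, cold files to tape; recall freshly used tape files to disk."""
        for f in files:
            cold = f.days_since_access > max_idle_days
            if f.tier == "disk" and cold and f.size_bytes >= min_size_bytes:
                f.tier = "tape"   # free fast disk capacity for active data
            elif f.tier == "tape" and f.days_since_access == 0:
                f.tier = "disk"   # recall data that is being used again
        return files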

Kepler is a free software system for designing, executing, reusing, evolving, archiving, and sharing scientific workflows. Kepler's facilities provide process and data monitoring, provenance information, and high-speed data movement. Workflows in general, and scientific workflows in particular, are directed graphs where the nodes represent discrete computational components, and the edges represent paths along which data and results can flow between components. In Kepler, the nodes are called 'Actors' and the edges are called 'channels'. Kepler includes a graphical user interface for composing workflows in a desktop environment, a runtime engine for executing workflows within the GUI and independently from a command-line, and a distributed computing option that allows workflow tasks to be distributed among compute nodes in a computer cluster or computing grid. The Kepler system principally targets the use of a workflow metaphor for organizing computational tasks that are directed towards particular scientific analysis and modeling goals. Thus, Kepler scientific workflows generally model the flow of data from one step to another in a series of computations that achieve some scientific goal.
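The actor/channel model is easiest to see in miniature. The sketch below is a generic Python rendering of that dataflow idea, not Kepler's own (Java-based) API: each actor is a callable node, each channel delivers an upstream result downstream, and an actor fires once all of its inputs have arrived.

    from collections import deque

    def run_workflow(actors, channels, inputs=None):
        """actors: name -> callable(inputs_list); channels: name -> downstream names."""
        indegree = {name: 0 for name in actors}
        received = {name: [] for name in actors}
        for src in channels:
            for dst in channels[src]:
                indegree[dst] += 1
        for name, value in (inputs or {}).items():
            received[name].append(value)
        ready = deque(name for name in actors if indegree[name] == 0)
        results = {}
        while ready:
            actor = ready.popleft()
            results[actor] = actors[actor](received[actor])   # fire the actor
            for downstream in channels.get(actor, []):
                received[downstream].append(results[actor])   # token on the channel
                indegree[downstream] -= 1
                if indegree[downstream] == 0:
                    ready.append(downstream)
        return results

    # A three-actor pipeline: read -> square -> total
    print(run_workflow(
        actors={"read": lambda _: [1, 2, 3],
                "square": lambda xs: [x * x for x in xs[0]],
                "total": lambda xs: sum(xs[0])},
        channels={"read": ["square"], "square": ["total"]},
    ))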

<span class="mw-page-title-main">View model</span>

A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of the whole system from the perspective of a related set of concerns.

gLite: Grid computing software

gLite is a middleware computer software project for grid computing used by the CERN LHC experiments and other scientific domains. It was implemented through the collaborative efforts of more than 80 people in 12 academic and industrial research centers in Europe. gLite provides a framework for building applications that tap into distributed computing and storage resources across the Internet. The gLite services were adopted by more than 250 computing centres and used by more than 15,000 researchers in Europe and around the world.

John B. Bell is an American mathematician and the Chief Scientist of the Computational Research Division at the Lawrence Berkeley National Laboratory. He has made contributions in the areas of finite difference methods, numerical methods for low Mach number flows, adaptive mesh refinement, interface tracking and parallel computing. He has also worked on the application of these numerical methods to problems from a broad range of fields, including combustion, shock physics, seismology, flow in porous media and astrophysics.

<span class="mw-page-title-main">John Harris (physicist)</span> American experimental physicist

John William Harris is an American experimental high energy nuclear physicist and D. Allan Bromley Professor of Physics at Yale University. His research interests are focused on understanding high energy density QCD and the quark–gluon plasma created in relativistic collisions of heavy ions. Harris collaborated on the original proposal to initiate a high energy heavy ion program at CERN in Geneva, Switzerland, has been actively involved in the CERN heavy ion program, and was the founding spokesperson for the STAR collaboration at RHIC at Brookhaven National Laboratory in the U.S.

<span class="mw-page-title-main">Data grid</span> Set of services used to access, modify and transfer geographical data

A data grid is an architecture or set of services that gives individuals or groups of users the ability to access, modify and transfer extremely large amounts of geographically distributed data for research purposes. Data grids make this possible through a host of middleware applications and services that pull together data and resources from multiple administrative domains and then present them to users upon request. The data in a data grid can be located at a single site or multiple sites, where each site can be its own administrative domain governed by a set of security restrictions as to who may access the data. Likewise, multiple replicas of the data may be distributed throughout the grid outside their original administrative domain, and the security restrictions placed on the original data for who may access it must be equally applied to the replicas. Specifically developed data grid middleware handles the integration between users and the data they request by controlling access while making it available as efficiently as possible.
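As a small illustration of two of the middleware duties just described, the sketch below enforces a dataset's access restrictions uniformly across all of its replicas and then selects the replica that can serve the request fastest. The site catalog, ACL model, and latency metric are assumptions for the example, not any particular data-grid product.

    from dataclasses import dataclass, field

    @dataclass
    class Replica:
        site: str            # administrative domain hosting this copy
        latency_ms: float    # estimated cost of serving the user from here

    @dataclass
    class Dataset:
        name: str
        authorized_users: set
        replicas: list = field(default_factory=list)

    def select_replica(dataset, user):
        """Apply the original dataset's ACL to every replica, then pick the fastest."""
        if user not in dataset.authorized_users:
            raise PermissionError(user + " may not access " + dataset.name + " at any site")
        if not dataset.replicas:
            raise LookupError("no replica registered in the catalog")
        return min(dataset.replicas, key=lambda r: r.latency_ms)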

<span class="mw-page-title-main">NCAR-Wyoming Supercomputing Center</span> High performance computing center in Wyoming, US

The NCAR-Wyoming Supercomputing Center (NWSC) is a high-performance computing (HPC) and data archival facility located in Cheyenne, Wyoming, that provides advanced computing services to researchers in the Earth system sciences.

Cloud management is the management of cloud computing products and services.

The High-performance Integrated Virtual Environment (HIVE) is a distributed computing environment used for healthcare-IT and biological research, including analysis of Next Generation Sequencing (NGS) data, preclinical, clinical and post-market data, adverse events, and metagenomic data. It is supported and continuously developed by the US Food and Drug Administration, George Washington University, and by DNA-HIVE, WHISE-Global and Embleema. HIVE operates fully functionally within the US FDA, supporting a wide variety (more than 60) of regulatory research and regulatory review projects as well as MDEpiNet medical device postmarket registries. Academic deployments of HIVE are used for research activities and publications in NGS analytics, cancer research, microbiome research and in educational programs for students at GWU. Commercial enterprises use HIVE for oncology, microbiology, vaccine manufacturing, gene editing, healthcare-IT, harmonization of real-world data, in preclinical research and clinical studies.

<span class="mw-page-title-main">Arthur M. Poskanzer</span> American physicist (1931–2021)

Arthur M. Poskanzer was an experimental physicist, known for his pioneering work on relativistic nuclear collisions.

The Australian Geoscience Data Cube (AGDC) is an approach to storing, processing and analyzing large collections of Earth observation data. The technology is designed to meet challenges of national interest by being agile and flexible with vast amounts of layered grid data.
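A toy example of the layered-grid idea: observations gridded over the same area are stacked along a time axis, and per-pixel statistics are computed across the stack. The array shapes, the random data, and the quality masking below are assumptions for illustration only, with no relation to the AGDC's actual schema or API.

    import numpy as np

    rng = np.random.default_rng(0)

    # 12 monthly scenes of one tile: time x height x width
    stack = rng.random((12, 400, 400))

    # a random mask standing in for a cloud/quality layer
    cloudy = rng.random(stack.shape) > 0.8
    clean = np.where(cloudy, np.nan, stack)

    # per-pixel annual median over the clean observations
    annual_median = np.nanmedian(clean, axis=0)   # shape (400, 400)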

Natalie Ann Roe is an experimental particle physicist and observational cosmologist, and has been the Associate Laboratory Director for the Physical Sciences Area at Lawrence Berkeley National Laboratory (LBNL) since 2020. Previously, she was the Physics Division Director for eight years. She has been elected a Fellow of the American Physical Society (APS) and of the American Association for the Advancement of Science (AAAS) for her scientific contributions.

<span class="mw-page-title-main">Kristin Persson</span> American physicist and chemist

Kristin Aslaug Persson is a Swedish/Icelandic American physicist and chemist. She was born in Lund, Sweden, in 1971, to Eva Haettner-Aurelius and Einar Benedikt Olafsson. She is a faculty senior staff scientist at Lawrence Berkeley National Laboratory and the Daniel M. Tellep Distinguished Professor of Materials Science and Engineering at University of California, Berkeley. Currently, she is also the director of the Molecular Foundry, a national user facility managed by the US Department of Energy at Lawrence Berkeley National Laboratory. Persson is the director and founder of the Materials Project, a multi-national effort to compute the properties of all inorganic materials. Her research group focuses on the data-driven computational design and prediction of new materials for clean energy production and storage applications. In 2024, Persson was elected a member of the Royal Swedish Academy of Sciences, in the class of Chemistry.

References

  1. "SRM Working Group". sdm.lbl.gov. Retrieved 2024-02-05.
  2. "SRM Working Group". sdm.lbl.gov. Retrieved 2024-02-05.