Cactus Framework

Developer(s): Cactus Team
Stable release: 4.17.0 / December 3, 2024
Operating system: Cross-platform
Type: Software framework
License: LGPL
Website: www.cactuscode.org

Cactus is an open-source problem-solving environment designed for scientists and engineers.[1][2] Its modular structure enables parallel computation across different architectures and collaborative code development between different groups. Cactus originated in the academic research community, where it has been developed and used over many years by a large international collaboration of physicists and computational scientists.

The name Cactus comes from its design: a central core (the "flesh") connects to application modules (the "thorns") through an extensible interface. Thorns can implement custom-developed scientific or engineering applications, such as computational fluid dynamics. Other thorns from a standard computational toolkit provide a range of computational capabilities, such as parallel I/O, data distribution, or checkpointing.[3][4]
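The flesh learns what a thorn provides from declaration files written in the Cactus Configuration Language (CCL).[4] The fragment below is a minimal sketch of what a thorn's interface.ccl and schedule.ccl can look like; the thorn name, grid function, and routine names here are hypothetical illustrations, not taken from any real thorn.

```
# interface.ccl -- declares the thorn's identity and its grid variables
implements: wavetoy
inherits: grid

CCTK_REAL scalarfield TYPE=GF TIMELEVELS=2
{
  phi
} "Scalar field evolved by this thorn"

# schedule.ccl -- tells the flesh when to call the thorn's routines
SCHEDULE WaveToy_InitialData AT CCTK_INITIAL
{
  LANG: C
} "Set up initial data for the scalar field"

SCHEDULE WaveToy_Evolve AT CCTK_EVOL
{
  LANG: C
} "Advance the scalar field by one time step"
```

The routines named in schedule.ccl are then implemented in ordinary C or Fortran sources in the thorn's src/ directory; toolkit thorns for I/O, checkpointing, and data distribution hook into the same schedule mechanism, which is what lets them be swapped in without changing application code.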

Cactus runs on many architectures. Applications developed on standard workstations or laptops can run seamlessly on clusters and supercomputers. Cactus provides easy access to many cutting-edge software technologies being developed in the academic research community, including the Globus Toolkit, HDF5 parallel file I/O, the PETSc scientific library, adaptive mesh refinement, web interfaces, and advanced visualization tools.

History

Cactus was originally developed at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), and is now developed jointly at the AEI, Cardiff University, and the Center for Computation & Technology at LSU. Several large packages are built on Cactus, among them a general relativistic spacetime evolution code, an adaptive mesh refinement driver (Carpet), and a general relativistic hydrodynamics code (Whisky).

Staff at the LSU Center for Computation & Technology who were part of the original AEI group that created Cactus celebrated the framework's 10th birthday in April 2007.


References

  1. Allen, Gabrielle; Benger, Werner; Goodale, Tom; Hege, Hans-Christian; Lanfermann, Gerd; Merzky, Andre; Radke, Thomas; Seidel, Edward; Shalf, John (1999). "Solving Einstein's equations on supercomputers" (PDF). Computer. 32 (12): 52–58. doi:10.1109/2.809251. Retrieved 2021-07-22.
  2. Allen, Gabrielle; Benger, Werner; Goodale, Tom; Hege, Hans-Christian; Lanfermann, Gerd; Merzky, Andre; Radke, Thomas; Seidel, Edward; Shalf, John (2000). "The Cactus Code: A problem solving environment for the grid" (PDF). Proceedings of the Ninth International Symposium on High-Performance Distributed Computing. IEEE. pp. 253–260. doi:10.1109/HPDC.2000.868657. Retrieved 2021-07-22.
  3. Goodale, Tom; Allen, Gabrielle; Lanfermann, Gerd; Massò, Joan; Radke, Thomas; Seidel, Edward; Shalf, John (2003). "The Cactus framework and toolkit: Design and applications". High Performance Computing for Computational Science – VECPAR 2002: 5th International Conference. LNCS, Vol. 2565. Springer. pp. 197–227. CiteSeerX 10.1.1.98.8838. doi:10.1007/3-540-36569-9_13.
  4. Allen, Gabrielle; Goodale, Tom; Löffler, Frank; Rideout, David; Schnetter, Erik; Seidel, Erik L. (2010). "Component specification in the Cactus Framework: The Cactus Configuration Language". 11th IEEE/ACM International Conference on Grid Computing. IEEE. pp. 359–368. arXiv:1009.1341. doi:10.1109/GRID.2010.5698008.