Cactus Framework

Developer(s): Cactus Team
Stable release: 4.15.0 / December 14, 2023
Operating system: Cross-platform
Type: Software framework
License: LGPL
Website: www.cactuscode.org

Cactus is an open-source problem-solving environment designed for scientists and engineers. [1] [2] Its modular structure enables parallel computation across different architectures and collaborative code development between different groups. Cactus originated in the academic research community, where it was developed and used over many years by a large international collaboration of physicists and computational scientists.

The name Cactus comes from the design of a central core (or "flesh") which connects to application modules (or "thorns") through an extensible interface. Thorns can implement custom developed scientific or engineering applications, such as computational fluid dynamics. Other thorns from a standard computational toolkit provide a range of computational capabilities, such as parallel I/O, data distribution, or checkpointing. [3] [4]
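The flesh/thorn split described above shows up concretely in each thorn's configuration files, written in the Cactus Configuration Language (CCL, see [4]): an interface file declares the thorn's variables to the flesh, and a schedule file tells the flesh when to call the thorn's routines. A minimal sketch of what such declarations might look like for a hypothetical wave-equation thorn (all names here are illustrative, not taken from any real thorn):

```
# interface.ccl -- declare the thorn's grid variables to the flesh
implements: wavetoy
public:
CCTK_REAL scalarevolve TYPE = GF TIMELEVELS = 2
{
  phi
} "The evolved scalar field"

# schedule.ccl -- register the evolution routine with the flesh's scheduler
schedule WaveToy_Evolve AT evol
{
  LANG: C
  STORAGE: scalarevolve[2]
} "Evolve the scalar field one timestep"
```

Because thorns interact with the flesh only through such declared interfaces, a driver thorn can swap in different data distribution or mesh refinement strategies without changes to the application thorn's source.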

Cactus runs on many architectures. Applications, developed on standard workstations or laptops, can be seamlessly run on clusters or supercomputers. Cactus provides easy access to many cutting-edge software technologies being developed in the academic research community, including the Globus Toolkit, HDF5 parallel file I/O, the PETSc scientific library, adaptive mesh refinement, web interfaces, and advanced visualization tools.

History

Cactus was originally developed at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), and is now developed jointly at the AEI, Cardiff University, and the Center for Computation & Technology at LSU. Several large packages are built on Cactus, including a general relativistic spacetime evolution code, an adaptive mesh refinement driver (Carpet), and a general relativistic hydrodynamics code (Whisky).

Staff at the LSU Center for Computation & Technology who were part of the original AEI group that created Cactus celebrated the program's 10th birthday in April 2007.

See also

Related Research Articles

Supercomputer

A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

Gerald Jay Sussman

Gerald Jay Sussman is the Panasonic Professor of Electrical Engineering at the Massachusetts Institute of Technology (MIT). He has been involved in artificial intelligence (AI) research at MIT since 1964. His research has centered on understanding the problem-solving strategies used by scientists and engineers, with the goals of automating parts of the process and formalizing it to provide more effective methods of science and engineering education. Sussman has also worked in computer languages, in computer architecture, and in Very Large Scale Integration (VLSI) design.

In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.
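The particle-to-grid deposit is the step that couples the Lagrangian particles to the Eulerian mesh. A minimal sketch of that step in one dimension, using linear (cloud-in-cell) weighting; this is an illustrative fragment only, and a real PIC code would add the field solve and particle push around it:

```python
import numpy as np

def pic_density_deposit(positions, grid_n, length=1.0):
    """Deposit unit-charge particles onto a periodic 1D mesh with
    linear (cloud-in-cell) weighting -- the particle-to-grid PIC step."""
    density = np.zeros(grid_n)
    dx = length / grid_n
    x = (positions / dx) % grid_n               # positions in cell units
    i = np.floor(x).astype(int)                 # left-hand grid index
    frac = x - i                                # fractional offset in cell
    np.add.at(density, i, 1.0 - frac)           # split each particle's weight
    np.add.at(density, (i + 1) % grid_n, frac)  # between the two nearest nodes
    return density
```

Note that `np.add.at` performs an unbuffered scatter-add, so particles landing in the same cell accumulate correctly; the total deposited weight always equals the particle count.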

ASCI Red

ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help the maintenance of the United States nuclear arsenal after the 1992 moratorium on nuclear testing.

David Bader (computer scientist)

David A. Bader is a Distinguished Professor and Director of the Institute for Data Science at the New Jersey Institute of Technology. Previously, he served as the Chair of the Georgia Institute of Technology School of Computational Science & Engineering, where he was also a founding professor, and the executive director of High-Performance Computing at the Georgia Tech College of Computing. In 2007, he was named the first director of the Sony Toshiba IBM Center of Competence for the Cell Processor at Georgia Tech.

Finite-difference time-domain method

Finite-difference time-domain (FDTD) or Yee's method is a numerical analysis technique used for modeling computational electrodynamics. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.
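The time-domain leapfrog at the heart of FDTD can be sketched in a few lines. The following 1D Yee-scheme fragment is a hedged illustration in normalized units (grid size, Courant factor, and source are chosen for clarity, not drawn from any reference implementation):

```python
import numpy as np

def fdtd_1d(steps=200, n=200):
    """Minimal 1D Yee-scheme FDTD sketch in free space, normalized units.

    E and H live on staggered grids and are updated in a leapfrog:
    each field advances from the spatial difference (curl) of the other.
    """
    Ez = np.zeros(n)       # electric field at integer grid points
    Hy = np.zeros(n - 1)   # magnetic field at half-integer points
    courant = 0.5          # Courant factor < 1 keeps the scheme stable
    for t in range(steps):
        Hy += courant * (Ez[1:] - Ez[:-1])        # update H from curl of E
        Ez[1:-1] += courant * (Hy[1:] - Hy[:-1])  # update E from curl of H
        # Soft source: inject a Gaussian pulse at the grid midpoint
        Ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)
    return Ez
```

Because the injected pulse is broadband in time, a single run like this samples the grid's response over a wide frequency range, which is the property the blurb above highlights.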

Centre for Development of Advanced Computing

The Centre for Development of Advanced Computing (C-DAC) is an Indian autonomous scientific society, operating under the Ministry of Electronics and Information Technology.

Nimrod is a tool for the parametrization of serial programs to create and execute embarrassingly parallel programs over a computational grid. It is a co-allocating, scheduling and brokering service. Nimrod was one of the first tools to make use of heterogeneous resources in a grid for a single computation. It was also an early example of using a market economy to perform grid scheduling. This enables Nimrod to provide a guaranteed completion time despite using best-effort services.

The Simple API for Grid Applications (SAGA) is a family of related standards specified by the Open Grid Forum to define an application programming interface (API) for common distributed computing functionality.

The Center for Computation and Technology (CCT) is an interdisciplinary research center located on the campus of Louisiana State University in Baton Rouge, Louisiana.

Edward Seidel is an American academic administrator and scientist serving as the president of the University of Wyoming since July 1, 2020. He previously served as the Vice President for Economic Development and Innovation for the University of Illinois System, as well as a Founder Professor in the Department of Physics and a professor in the Department of Astronomy at the University of Illinois at Urbana-Champaign. He was the director of the National Center for Supercomputing Applications at Illinois from 2014 to 2017.

HPX, short for High Performance ParalleX, is a runtime system for high-performance computing. It is currently under active development by the STE||AR group at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces with increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.

The Sidney Fernbach Award was established in 1992 by the IEEE Computer Society in memory of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for the solution of large computational problems, who served as Division Chief for the Computation Division at Lawrence Livermore Laboratory from the late 1950s through the 1970s. A certificate and $2,000 are awarded for outstanding contributions in the application of high-performance computers using innovative approaches. The nomination deadline is 1 July each year.

Tachyon (software)

Tachyon is a parallel/multiprocessor ray tracing software. It is a parallel ray tracing library for use on distributed memory parallel computers, shared memory computers, and clusters of workstations. Tachyon implements rendering features such as ambient occlusion lighting, depth-of-field focal blur, shadows, reflections, and others. It was originally developed for the Intel iPSC/860 by John Stone for his M.S. thesis at University of Missouri-Rolla. Tachyon subsequently became a more functional and complete ray tracing engine, and it is now incorporated into a number of other open source software packages such as VMD, and SageMath. Tachyon is released under a permissive license.

Computational astrophysics

Computational astrophysics refers to the methods and computing tools developed and used in astrophysics research. Like computational chemistry or computational physics, it is both a specific branch of theoretical astrophysics and an interdisciplinary field relying on computer science, mathematics, and wider physics. Computational astrophysics is most often studied through an applied mathematics or astrophysics programme at PhD level.

SimGrid

SimGrid is a toolkit that provides core functionalities for the simulation of distributed applications in heterogeneous distributed environments. The specific goal of the project is to facilitate research in the area of parallel and distributed large scale systems, such as Grids, P2P systems and Cloud. Its use cases encompass heuristic evaluation, application prototyping or even real application development and tuning.

Science gateways provide access to advanced resources for science and engineering researchers, educators, and students. Through streamlined, online, user-friendly interfaces, gateways combine a variety of cyberinfrastructure (CI) components in support of a community-specific set of tools, applications, and data collections. In general, these specialized, shared resources are integrated as a Web portal, mobile app, or a suite of applications. Through science gateways, broad communities of researchers can access diverse resources which can save both time and money for themselves and their institutions. Functions and resources offered by science gateways include shared equipment and instruments, computational services, advanced software applications, collaboration capabilities, data repositories, and networks.

Elastix is an image registration toolbox built upon the Insight Segmentation and Registration Toolkit (ITK). It is entirely open-source and provides a wide range of algorithms employed in image registration problems. Its components are designed to be modular to ease the fast and reliable creation of various registration pipelines tailored for case-specific applications. It was first developed by Stefan Klein and Marius Staring under the supervision of Josien P.W. Pluim at the Image Sciences Institute (ISI). Its first version was command-line based, allowing the end user to employ scripts to automatically process large data sets and deploy multiple registration pipelines with a few lines of code. To further widen its audience, a version called SimpleElastix, developed by Kasper Marstal, is also available, which allows the integration of Elastix with high-level languages such as Python, Java, and R.

Gabrielle D. Allen is a British and American computational astrophysicist known for her work in astrophysical simulations and multi-messenger astronomy, and as one of the original developers of the Cactus Framework for parallel scientific computation. She is a professor of mathematics and statistics at the University of Wyoming.

References

  1. Allen, Gabrielle; Benger, Werner; Goodale, Tom; Hege, Hans-Christian; Lanfermann, Gerd; Merzky, Andre; Radke, Thomas; Seidel, Edward; Shalf, John (1999). "Solving Einstein's equations on supercomputers" (PDF). Computer. 32 (12): 52–58. doi:10.1109/2.809251. Retrieved 2021-07-22.
  2. Allen, Gabrielle; Benger, Werner; Goodale, Tom; Hege, Hans-Christian; Lanfermann, Gerd; Merzky, Andre; Radke, Thomas; Seidel, Edward; Shalf, John (2000). "The Cactus Code: A problem solving environment for the grid" (PDF). Proceedings of the Ninth International Symposium on High-Performance Distributed Computing. IEEE. pp. 253–260. doi:10.1109/HPDC.2000.868657. Retrieved 2021-07-22.
  3. Goodale, Tom; Allen, Gabrielle; Lanfermann, Gerd; Massò, Joan; Radke, Thomas; Seidel, Edward; Shalf, John (2003). "The Cactus framework and toolkit: Design and applications". High Performance Computing for Computational Science - VECPAR 2002: 5th International Conference. LNCS, Vol. 2565. Springer. pp. 197–227. CiteSeerX 10.1.1.98.8838. doi:10.1007/3-540-36569-9_13.
  4. Allen, Gabrielle; Goodale, Tom; Löffler, Frank; Rideout, David; Schnetter, Erik; Seidel, Erik L. (2010). "Component specification in the Cactus Framework: The Cactus Configuration Language". 11th IEEE/ACM International Conference on Grid Computing. IEEE. pp. 359–368. arXiv:1009.1341. doi:10.1109/GRID.2010.5698008.