Center for the Simulation of Advanced Rockets

Founded: 1997
Location: University of Illinois at Urbana-Champaign
Department: Computational Science and Engineering
Goal: Develop accurate computational models of solid propellant rocket systems
Staff: Approx. 80 faculty, staff, and students [1]
Research areas: Fluids and Combustion; Structures and Materials; Computer Science; System Integration; Uncertainty Integration

The Center for Simulation of Advanced Rockets (CSAR) is an interdisciplinary research group at the University of Illinois at Urbana-Champaign, and is part of the United States Department of Energy's Advanced Simulation and Computing Program. CSAR's goal is to accurately predict the performance, reliability, and safety of solid propellant rockets. [2]

CSAR was founded in 1997 as part of the Department of Energy's Advanced Simulation and Computing Program. The goal of that program is to "enable accurate prediction of the performance, reliability, and safety of complex physical systems through computational simulation." CSAR applies this goal to solid propellant rockets, specifically those used by the Space Shuttle. [1]

CSAR aims to simulate entire rocket systems under both normal and abnormal operating conditions. This requires highly accurate modeling of rocket components, fuel-flow dynamics, and other environmental factors. Such modeling demands large computational power, on the order of thousands of processors, so development of the computational infrastructure is critical to achieving this goal. [1]

Areas of research

CSAR's research spans several fields. [3]

Computation environment

Physical simulations are performed using Rocstar, CSAR's suite of numerical solver applications. Rocstar was built by CSAR and is designed to run efficiently on massively parallel computers. It is implemented using MPI and is fully compatible with Adaptive MPI (AMPI). Rocstar is currently in its third version, Rocstar 3, and documentation is available through its User's Guide.

CSAR uses a number of supercomputing resources for its simulations. The National Center for Supercomputing Applications (NCSA), also located at the University of Illinois at Urbana-Champaign, provides the computing environment for many of CSAR's simulations. CSAR also uses Turing, a supercomputing cluster operated by the university's Computational Science and Engineering department. [4]

The computation environment used by CSAR builds on work done by the University of Illinois' Parallel Programming Lab, in particular Charm++ and Adaptive MPI (AMPI). [5] These parallel programming frameworks allow applications to scale easily to thousands of processors, letting highly complex computations finish quickly. The runtime system shared by Charm++ and AMPI has two features that CSAR's software relies on: load balancing, which improves performance by keeping work distributed evenly across all processors, and checkpointing, which allows a lengthy computation to be saved and restarted rather than run again from the beginning.
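The two runtime features described above can be sketched in plain Python. This is an illustrative sketch only, not the Charm++/AMPI API: the function names, the greedy balancing strategy, and the `state.json` checkpoint file format are invented for the example.

```python
import heapq
import json
import os

CHECKPOINT = "state.json"  # hypothetical checkpoint file name


def balance(task_costs, n_procs):
    """Greedy load balancing: repeatedly hand the largest remaining task
    to the least-loaded processor, keeping work spread evenly."""
    loads = [(0.0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(loads)           # least-loaded processor
        assignment[p].append(cost)
        heapq.heappush(loads, (load + cost, p))
    return assignment


def load_checkpoint():
    """Resume from a saved checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
        return state["step"], state["value"]
    return 0, 0.0


def run(total_steps=1000, checkpoint_every=100):
    """A long 'computation' that periodically saves its state, so an
    interrupted run restarts from the last checkpoint, not from scratch."""
    step, value = load_checkpoint()
    while step < total_steps:
        value += 1.0                             # stand-in for one timestep of work
        step += 1
        if step % checkpoint_every == 0:
            with open(CHECKPOINT, "w") as f:     # periodic checkpoint
                json.dump({"step": step, "value": value}, f)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)                    # clean up after a complete run
    return value
```

For example, `balance([5, 4, 3, 2, 1, 1], 2)` gives each of two processors a total load of 8, and if `run()` is interrupted, the next call resumes from the last saved step. The real runtime does this transparently by migrating objects between processors, rather than requiring the application to manage its own state files.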

Using these highly parallel tools, CSAR's developers have built a number of components that simulate the various physical phenomena involved in rocket propulsion. Combined, these components provide a complete simulation environment. The Rocstar modules are listed below, grouped by field; each has its own user's guide.

Rocstar Modules [6]

Combustion: Rocburn
Fluids: RocfloMP, RocfluMP, Rocpart, Rocturb, Rocrad
Solids: Rocfrac, Rocsolid
Computer Science: Rocman, Roccom, Rocface, Rocblas, Rocin, RocHDF, Rocmop, Rocrem, Rocketeer (visualization tool for complex 2-D and 3-D data sets)
Utilities: Rocbuild, Roctest, Rocdiff, Rocprep

Events

CSAR hosted the International Symposium on Solid Rocket Modeling and Simulation. [7]

References

  1. About CSAR. Archived May 13, 2008, at the Wayback Machine. Retrieved October 10, 2008.
  2. CSAR Homepage. Archived October 6, 2008, at the Wayback Machine. Retrieved October 10, 2008.
  3. Basic Research at CSAR. Archived May 10, 2008, at the Wayback Machine. Retrieved October 10, 2008.
  4. CSAR Computing. Archived February 1, 2009, at the Wayback Machine. Retrieved October 11, 2008.
  5. Parallel Programming Lab: Rocket Simulation. Retrieved October 11, 2008.
  6. CSAR Software Documentation. Archived May 13, 2008, at the Wayback Machine. Retrieved October 15, 2008.
  7. International Symposium on Solid Rocket Modeling and Simulation. Archived May 13, 2008, at the Wayback Machine. Retrieved October 10, 2008.