Model order reduction (MOR) is a technique for reducing the computational complexity of mathematical models in numerical simulations. As such it is closely related to the concept of metamodeling, with applications in all areas of mathematical modelling.
Many modern mathematical models of real-life processes pose challenges when used in numerical simulations, due to complexity and large size (dimension). Model order reduction aims to lower the computational complexity of such problems, for example, in simulations of large-scale dynamical systems and control systems. By reducing the model's associated state space dimension or degrees of freedom, an approximation to the original model is computed, commonly referred to as a reduced order model.
Reduced order models are useful in settings where it is often infeasible to perform numerical simulations using the complete full order model. This can be due to limitations in computational resources or the requirements of the simulation setting, for instance real-time simulation settings or many-query settings in which a large number of simulations needs to be performed. [1] [2] Examples of real-time simulation settings include control systems in electronics and visualization of model results, while examples of many-query settings include optimization problems and design exploration. To be applicable to real-world problems, a reduced order model often has to fulfill the following requirements: [3] [4]
In some cases (e.g. constrained lumping of polynomial differential equations), the approximation error can be exactly zero, resulting in an exact model order reduction. [5]
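As an illustration (not drawn from reference [5]), the following sketch shows linear lumping of a linear system, a special case of the above: if a lumping matrix M satisfies M A = Â M for some reduced matrix Â, then y = M x obeys the reduced dynamics exactly and the approximation error is zero. The matrices below are illustrative.

```python
# A minimal sketch of exact lumping for a linear ODE system dx/dt = A x.
# The matrices below are illustrative, not taken from the article's references.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])      # full model: dx/dt = A x
M = np.array([[1.0, 1.0]])       # lumping map: y = M x (sum of the two states)

# Exact lumping requires M A = A_hat M for some reduced matrix A_hat.
# Here M A = [-1, -1] = -1 * M, so A_hat = [[-1]] and the reduction is exact.
A_hat = np.array([[-1.0]])
assert np.allclose(M @ A, A_hat @ M)

x0 = np.array([1.0, 0.0])
t = np.linspace(0.0, 5.0, 50)

full = solve_ivp(lambda t, x: A @ x, (0, 5), x0, t_eval=t)
red = solve_ivp(lambda t, y: A_hat @ y, (0, 5), M @ x0, t_eval=t)

# Zero approximation error: the lumped trajectory matches M x(t)
# up to integrator tolerance.
print(np.max(np.abs(M @ full.y - red.y)))
```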
Contemporary model order reduction techniques can be broadly classified into 5 classes: [1] [6]
The simplified physics approach can be described as analogous to the traditional mathematical modelling approach, in which a less complex description of a system is constructed based on assumptions and simplifications using physical insight or otherwise derived information. However, this approach is not often the topic of discussion in the context of model order reduction, as it is a general method in science, engineering, and mathematics.
The remaining listed methods fall into the category of projection-based reduction. Projection-based reduction relies on the projection of either the model equations or the solution onto a basis of reduced dimensionality compared to the original solution space. Methods that also fall into this class but are perhaps less common are:
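Independently of the specific method chosen, the core projection step can be illustrated with a minimal sketch: a proper orthogonal decomposition (POD) basis is computed from snapshots of a full-order linear system, and the system is then reduced by Galerkin projection. The diffusion-like operator, dimensions, and basis size below are illustrative assumptions, not taken from any of the cited references.

```python
# Minimal sketch of projection-based reduction: POD basis from snapshots,
# then Galerkin projection of a linear system dx/dt = A x.
import numpy as np
from scipy.integrate import solve_ivp

n, r = 200, 5                         # full and reduced dimensions

# A stable full-order operator (diffusion-like tridiagonal matrix).
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
x0 = np.exp(-0.5 * ((np.arange(n) - n / 2) / 10.0) ** 2)   # smooth initial state

t = np.linspace(0.0, 1.0, 100)
full = solve_ivp(lambda t, x: A @ x, (0, 1), x0, t_eval=t)

# POD: left singular vectors of the snapshot matrix give the reduced basis V.
U, s, _ = np.linalg.svd(full.y, full_matrices=False)
V = U[:, :r]                          # n x r basis, x ≈ V z

# Galerkin projection: dz/dt = (V^T A V) z,  z(0) = V^T x0.
A_r = V.T @ A @ V
red = solve_ivp(lambda t, z: A_r @ z, (0, 1), V.T @ x0, t_eval=t)

x_approx = V @ red.y
print("relative error:", np.linalg.norm(full.y - x_approx) / np.linalg.norm(full.y))
```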
There are also nonintrusive model reduction methods that learn reduced models from data without requiring knowledge about the governing equations and internals of the full, high-fidelity model. Nonintrusive methods learn a low-dimensional approximation space or manifold and the reduced operators that represent the reduced dynamics from data. Nonintrusive methods include:
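As a minimal illustration of the nonintrusive idea, the sketch below fits a reduced operator from snapshot data alone by least squares, in the spirit of operator inference; the data-generating system, basis size, and time step are illustrative assumptions, and the full model is used only as a black box to produce snapshots.

```python
# Minimal sketch of nonintrusive reduction in the spirit of operator inference:
# a reduced linear operator is fitted from snapshot data alone, without access
# to the full-order matrices. Data generation below is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, r = 100, 4

# Pretend these snapshots came from a black-box solver (here: a random stable
# linear system integrated with explicit Euler, used only to create data).
A_true = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
dt, steps = 1e-3, 2000
X = np.empty((n, steps))
X[:, 0] = rng.standard_normal(n)
for k in range(steps - 1):
    X[:, k + 1] = X[:, k] + dt * (A_true @ X[:, k])

# Reduced basis from POD of the snapshots.
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]
Z = V.T @ X                                   # reduced states
dZ = np.gradient(Z, dt, axis=1)               # time derivatives from data

# Least-squares fit of the reduced operator: dz/dt ≈ A_r z.
A_r, *_ = np.linalg.lstsq(Z.T, dZ.T, rcond=None)
A_r = A_r.T
print("learned reduced operator:\n", A_r)
```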
Model order reduction finds application within all fields involving mathematical modelling and many reviews [10] [13] exist for the topics of electronics, [23] fluid mechanics, [24] hydrodynamics, [25] structural mechanics, [7] MEMS, [26] Boltzmann equation, [8] and design optimization. [14] [27]
Current problems in fluid mechanics involve large dynamical systems representing many effects on many different scales. Computational fluid dynamics studies often involve models solving the Navier–Stokes equations with a number of degrees of freedom ranging into the millions. The first usage of model order reduction techniques in this field dates back to the work of Lumley in 1967, [28] where it was used to gain insight into the mechanisms and intensity of turbulence and the large coherent structures present in fluid flow problems. Model order reduction also finds modern applications in aeronautics to model the flow over the body of aircraft. [29] An example can be found in Lieu et al., [30] in which the full order model of an F-16 fighter aircraft with over 2.1 million degrees of freedom was reduced to a model of just 90 degrees of freedom. Additionally, reduced order modeling has been applied to study rheology in hemodynamics and the fluid–structure interaction between the blood flowing through the vascular system and the vascular walls. [31] [32]
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion, achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena.
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
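A minimal sketch of the idea, estimating π by repeated random sampling of points in the unit square (the sample count is an illustrative choice):

```python
# Minimal sketch of a Monte Carlo experiment: estimating pi by repeated
# random sampling of points in the unit square.
import numpy as np

rng = np.random.default_rng(42)
samples = 1_000_000
points = rng.random((samples, 2))                # uniform points in [0, 1)^2
inside = (points ** 2).sum(axis=1) <= 1.0        # points falling in the quarter disc
print("pi estimate:", 4.0 * inside.mean())       # converges to pi as samples grow
```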
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science, more specifically of the computer sciences, which uses advanced computing capabilities to understand and solve complex physical problems. While this typically extends into visual computation, the field of study usually includes several research categorizations.
Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).
Fluid–structure interaction (FSI) is the interaction of some movable or deformable structure with an internal or surrounding fluid flow. Fluid–structure interactions can be stable or oscillatory. In oscillatory interactions, the strain induced in the solid structure causes it to move such that the source of strain is reduced, and the structure returns to its former state only for the process to repeat.
In the field of numerical analysis, meshfree methods are those that do not require connections between the nodes of the simulation domain, i.e. a mesh, but are rather based on the interaction of each node with all its neighbors. As a consequence, original extensive properties such as mass or kinetic energy are no longer assigned to mesh elements but rather to the single nodes. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. The absence of a mesh allows Lagrangian simulations, in which the nodes can move according to the velocity field.
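A minimal sketch of the meshfree viewpoint, assuming an SPH-style Gaussian smoothing kernel and a prescribed rotational velocity field (both illustrative choices): nodes carry mass, density is reconstructed from neighbours, and the nodes are advected in a Lagrangian fashion.

```python
# Minimal sketch of the meshfree idea: nodes carry mass, the density at a node
# is reconstructed from its neighbours with a smoothing kernel (as in SPH),
# and nodes move with a velocity field (Lagrangian update). The Gaussian
# kernel and all parameters here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, h, dt = 500, 0.05, 0.01
pos = rng.random((n, 2))            # scattered nodes in the unit square
mass = np.full(n, 1.0 / n)          # each node carries an extensive property

def kernel(r, h):
    """Gaussian smoothing kernel in 2D (integrates to one)."""
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def density(pos, mass, h):
    # Sum kernel-weighted masses of all neighbours (O(n^2), fine for a sketch).
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * kernel(r, h)).sum(axis=1)

def velocity(p):
    # Prescribed rotational velocity field about the domain centre.
    x, y = p[:, 0] - 0.5, p[:, 1] - 0.5
    return np.stack([-y, x], axis=1)

for _ in range(100):                # Lagrangian update: nodes follow the flow
    pos += dt * velocity(pos)

print("mean reconstructed density:", density(pos, mass, h).mean())
```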
A phase-field model is a mathematical model for solving interfacial problems. It has mainly been applied to solidification dynamics, but it has also been applied to other situations such as viscous fingering, fracture mechanics, hydrogen embrittlement, and vesicle dynamics.
The finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems.
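A minimal sketch of the method for the 1D Poisson problem −u″ = 1 on (0, 1) with homogeneous Dirichlet boundary conditions, using linear elements; the mesh size and right-hand side are illustrative choices, and the exact solution is u(x) = x(1 − x)/2.

```python
# Minimal finite element sketch: linear elements for -u'' = 1 on (0, 1)
# with u(0) = u(1) = 0.
import numpy as np

n_el = 20                              # number of elements
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = nodes[1] - nodes[0]

K = np.zeros((n_el + 1, n_el + 1))     # global stiffness matrix
F = np.zeros(n_el + 1)                 # global load vector

for e in range(n_el):                  # assembly loop over elements
    i, j = e, e + 1
    k_local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    K[np.ix_([i, j], [i, j])] += k_local
    F[[i, j]] += 1.0 * h / 2.0         # linear-element load for f(x) = 1

# Apply homogeneous Dirichlet boundary conditions by removing boundary rows/cols.
interior = slice(1, n_el)
u = np.zeros(n_el + 1)
u[interior] = np.linalg.solve(K[interior, interior], F[interior])

exact = nodes * (1.0 - nodes) / 2.0
print("max nodal error:", np.abs(u - exact).max())
```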
The moving particle semi-implicit (MPS) method is a computational method for the simulation of incompressible free surface flows. It is a macroscopic, deterministic particle method developed by Koshizuka and Oka (1996).
The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
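A minimal sketch of the method for the test problem u′ + u² = 0, u(0) = 1, whose exact solution is 1/(1 + t); the auxiliary linear operator L[u] = u′, the initial guess u₀ = 1, and the convergence-control parameter ħ are illustrative choices rather than prescriptions from the literature.

```python
# Minimal sketch of the homotopy analysis method for u'(t) + u(t)^2 = 0,
# u(0) = 1, whose exact solution is 1/(1 + t).
import sympy as sp

t = sp.symbols('t')
hbar = -1                      # convergence-control parameter (illustrative)
u = [sp.Integer(1)]            # initial guess u0(t) = 1 satisfies u(0) = 1

order = 5
for m in range(1, order + 1):
    # R_m = u'_{m-1} + sum_{k=0}^{m-1} u_k u_{m-1-k}: the coefficient of
    # q^{m-1} in N[sum u_k q^k] for the nonlinear operator N[u] = u' + u^2.
    R = sp.diff(u[m - 1], t) + sum(u[k] * u[m - 1 - k] for k in range(m))
    chi = 0 if m == 1 else 1
    # m-th order deformation equation L[u_m - chi*u_{m-1}] = hbar * R_m,
    # solved with u_m(0) = 0 by integrating from 0 to t.
    u_m = chi * u[m - 1] + hbar * sp.integrate(R, (t, 0, t))
    u.append(sp.expand(u_m))

approx = sp.expand(sum(u))
print(approx)                  # 1 - t + t**2 - t**3 + ... for hbar = -1
```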
Denis Louis Blackmore was an American mathematician and a full professor of the Department of Mathematical Sciences at New Jersey Institute of Technology. He was also one of the founding members of the Center for Applied Mathematics and Statistics at NJIT. Dr. Blackmore was mainly known for his many contributions in the fields of dynamical systems and differential topology. In addition to this, he had many contributions in other fields of applied mathematics, physics, biology, and engineering.
In applied mathematics, the finite pointset method (FPM) is a general approach for the numerical solution of problems in continuum mechanics, such as the simulation of fluid flows. In this approach the medium is represented by a finite set of points, each endowed with the relevant local properties of the medium such as density, velocity, pressure, and temperature.
MOOSE is an object-oriented C++ finite element framework for the development of tightly coupled multiphysics solvers from Idaho National Laboratory. MOOSE makes use of the PETSc non-linear solver package and libMesh to provide the finite element discretization.
Alexander Nikolaevich Gorban is a scientist of Russian origin, working in the United Kingdom. He is a professor at the University of Leicester, and director of its Mathematical Modeling Centre. Gorban has contributed to many areas of fundamental and applied science, including statistical physics, non-equilibrium thermodynamics, machine learning and mathematical biology.
The finite point method (FPM) is a meshfree method for solving partial differential equations (PDEs) on scattered distributions of points. The FPM was proposed in the mid-nineties with the purpose of facilitating the solution of problems involving complex geometries, free surfaces, moving boundaries and adaptive refinement. Since then, the FPM has evolved considerably, showing satisfactory accuracy and capabilities to deal with different fluid and solid mechanics problems.
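A minimal sketch of the weighted least-squares approximation that underlies such point-cloud methods, estimating a derivative at a "star" point from scattered neighbours; the test function, Gaussian weighting, and cloud parameters are illustrative assumptions rather than a reference implementation of the FPM.

```python
# Minimal sketch of a weighted least-squares (WLS) derivative estimate on a
# scattered 1D point distribution, as used in point-cloud/meshfree methods.
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.random(40))            # scattered 1D point distribution
u = np.sin(2 * np.pi * x)              # field values known only at the points

def wls_derivative(x_star, x, u, h=0.1, degree=2):
    """Estimate du/dx at x_star by a Gaussian-weighted polynomial fit."""
    d = x - x_star
    w = np.exp(-(d / h) ** 2)                        # local weighting function
    V = np.vander(d, degree + 1, increasing=True)    # basis 1, d, d^2, ...
    # Solve the weighted least-squares system (W V) a ≈ (W u).
    a, *_ = np.linalg.lstsq(V * w[:, None], u * w, rcond=None)
    return a[1]                                      # coefficient of d = du/dx

x_star = 0.5
print("WLS estimate :", wls_derivative(x_star, x, u))
print("exact        :", 2 * np.pi * np.cos(2 * np.pi * x_star))
```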
Eleni Chatzi is a Greek civil engineer, researcher, and a professor and Chair of Structural Mechanics and Monitoring at the Department of Civil, Environmental and Geomatic Engineering of the Swiss Federal Institute of Technology in Zurich.
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximator that can embed the knowledge of any physical laws that govern a given data set, expressed as partial differential equations (PDEs), into the learning process. Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. In this way, embedding prior information into a neural network enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with few training examples.
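A minimal sketch of the idea in plain NumPy, assuming a tiny one-hidden-layer network, the simple ODE u′ = −u with u(0) = 1 as the "physical law", and a general-purpose optimizer; all hyperparameters are illustrative, and practical PINNs rely on automatic differentiation frameworks rather than the hand-coded derivative used here.

```python
# Minimal PINN sketch: a one-hidden-layer network is trained so that its
# output satisfies the ODE residual u'(x) + u(x) = 0 at collocation points
# plus the boundary condition u(0) = 1 (exact solution: exp(-x)).
import numpy as np
from scipy.optimize import minimize

H = 10                                     # hidden units
x_col = np.linspace(0.0, 1.0, 50)          # collocation points for the residual

def unpack(p):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    return w1, b1, w2, b2

def net(p, x):
    w1, b1, w2, b2 = unpack(p)
    z = np.tanh(np.outer(x, w1) + b1)      # hidden activations
    return z @ w2 + b2                     # u_theta(x)

def net_dx(p, x):
    w1, b1, w2, b2 = unpack(p)
    z = np.tanh(np.outer(x, w1) + b1)
    return (1.0 - z**2) @ (w1 * w2)        # analytic du_theta/dx

def loss(p):
    residual = net_dx(p, x_col) + net(p, x_col)        # physics term: u' + u = 0
    bc = net(p, np.array([0.0]))[0] - 1.0              # boundary term: u(0) = 1
    return np.mean(residual**2) + bc**2

p0 = 0.1 * np.random.default_rng(0).standard_normal(3 * H + 1)
result = minimize(loss, p0, method="BFGS")

x_test = np.linspace(0.0, 1.0, 5)
print("PINN :", net(result.x, x_test))
print("exact:", np.exp(-x_test))
```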