Model order reduction (MOR) is a technique for reducing the computational complexity of mathematical models in numerical simulations. As such it is closely related to the concept of metamodeling, with applications in all areas of mathematical modelling.
Many modern mathematical models of real-life processes pose challenges when used in numerical simulations, due to complexity and large size (dimension). Model order reduction aims to lower the computational complexity of such problems, for example, in simulations of large-scale dynamical systems and control systems. By reducing the model's associated state-space dimension or degrees of freedom, an approximation to the original model is computed, commonly referred to as a reduced order model.
Reduced order models are useful in settings where it is often infeasible to perform numerical simulations using the complete full order model. This can be due to limitations in computational resources or the requirements of the simulation setting, for instance real-time simulation settings or many-query settings in which a large number of simulations needs to be performed. [1] [2] Examples of real-time simulation settings include control systems in electronics and visualization of model results, while examples of many-query settings include optimization problems and design exploration. In order to be applicable to real-world problems, the requirements of a reduced order model often are: [3] [4]
- a small approximation error compared to the full order model,
- conservation of the properties and characteristics of the full order model (e.g., stability and passivity),
- computationally efficient and robust reduced order modelling techniques.
Notably, in some cases (e.g. constrained lumping of polynomial differential equations) the approximation error can be zero, resulting in an exact model order reduction. [5]
Contemporary model order reduction techniques can be broadly classified into five classes: [1] [6]
- Simplified physics or operational based reduction methods
- Proper orthogonal decomposition (POD) methods
- Reduced basis methods
- Balancing based methods
- Transfer function interpolation methods
The simplified physics approach is analogous to the traditional mathematical modelling approach: a less complex description of a system is constructed based on assumptions and simplifications, using physical insight or otherwise derived information. However, this approach is seldom discussed in the context of model order reduction, as it is a general method in science, engineering, and mathematics.
The remaining listed methods fall into the category of projection-based reduction. Projection-based reduction relies on projecting either the model equations or the solution onto a basis of reduced dimensionality compared to the original solution space, as illustrated in the sketch below. Other, perhaps less common, methods also fall into this class.
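To make the projection idea concrete, the following is a minimal sketch, assuming Python with NumPy and a randomly generated stable linear system chosen purely for illustration: snapshots of the full-order model are compressed by a singular value decomposition (proper orthogonal decomposition), and the dynamics are Galerkin-projected onto the leading modes.

```python
# A minimal sketch of projection-based reduction via proper orthogonal
# decomposition (POD). The system matrix, dimensions, and explicit Euler
# time stepper are illustrative assumptions, not a specific method from
# the literature.
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 10                      # full and reduced state dimensions

# Full-order model: dx/dt = A x (a stable random system, for illustration)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

# Collect snapshots of the full model with explicit Euler
dt, steps = 1e-3, 200
X = np.empty((n, steps))
x = x0.copy()
for k in range(steps):
    x = x + dt * (A @ x)
    X[:, k] = x

# POD basis: leading left singular vectors of the snapshot matrix
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Galerkin projection: reduced operator and initial condition
Ar = V.T @ A @ V                    # r x r instead of n x n
xr = V.T @ x0

# Time-step the reduced model and lift back to the full space
for k in range(steps):
    xr = xr + dt * (Ar @ xr)
print("reconstruction error:", np.linalg.norm(V @ xr - x) / np.linalg.norm(x))
```

The reduced simulation advances an r-by-r system instead of an n-by-n one, which is the source of the computational savings.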
Model order reduction finds application in all fields involving mathematical modelling, and many reviews [10] [13] exist for the topics of electronics, [16] fluid mechanics, [17] hydrodynamics, [18] structural mechanics, [7] MEMS, [19] the Boltzmann equation, [8] and design optimization. [14] [20]
Current problems in fluid mechanics involve large dynamical systems representing many effects on many different scales. Computational fluid dynamics studies often involve models solving the Navier–Stokes equations with a number of degrees of freedom on the order of 10^6 or more. The first usage of model order reduction techniques dates back to the work of Lumley in 1967, [21] where it was used to gain insight into the mechanisms and intensity of turbulence and of the large coherent structures present in fluid flow problems. Model order reduction also finds modern applications in aeronautics, for example to model the flow over the body of an aircraft. [22] An example can be found in Lieu et al., [23] in which the full order model of an F-16 fighter aircraft with over 2.1 million degrees of freedom was reduced to a model of just 90 degrees of freedom. Additionally, reduced order modeling has been applied to study rheology in hemodynamics and the fluid–structure interaction between the blood flowing through the vascular system and the vascular walls. [24] [25]
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion, achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena.
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanislaw Ulam, was inspired by his uncle's gambling habits.
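A minimal sketch of the idea, assuming only Python's standard library: estimating π by repeated random sampling, the textbook example of a Monte Carlo experiment.

```python
# Monte Carlo estimation of pi: sample uniform random points in the unit
# square and count how many fall inside the quarter circle of radius 1.
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # The quarter circle occupies pi/4 of the unit square's area.
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))   # approaches 3.14159... as n grows
```

The estimate converges at the characteristic Monte Carlo rate of O(1/√n), independent of problem dimension.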
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).
Finite-difference time-domain (FDTD) or Yee's method is a numerical analysis technique used for modeling computational electrodynamics. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.
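A minimal sketch of the scheme in one dimension, assuming Python with NumPy, normalized units, and a Courant number of one (all choices made for brevity): the electric and magnetic fields live on staggered grids and are updated in leapfrog fashion.

```python
# One-dimensional FDTD (Yee scheme) in vacuum, normalized units.
import numpy as np

nz, nt = 400, 600
ez = np.zeros(nz)          # E field at integer grid points
hy = np.zeros(nz - 1)      # H field at half grid points (staggered)

for n in range(nt):
    hy += ez[1:] - ez[:-1]                 # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]           # update E from the curl of H
    ez[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```

Because the source is a short pulse in time, a Fourier transform of the recorded fields yields the response over a wide frequency band from this single run, which is the property noted above.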
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, one in which a hurricane occurs where it otherwise would not have. The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
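A minimal sketch of this sensitivity, assuming Python with NumPy, a fixed-step fourth-order Runge–Kutta integrator, and the classic parameters σ = 10, ρ = 28, β = 8/3: two trajectories starting 1e-8 apart end up macroscopically separated.

```python
# Sensitive dependence on initial conditions in the Lorenz system.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb the initial condition slightly
for _ in range(steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
print("separation after 40 time units:", np.linalg.norm(a - b))
```

Despite the fully deterministic dynamics, the printed separation is of the same order as the attractor itself.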
Fluid–structure interaction (FSI) is the interaction of some movable or deformable structure with an internal or surrounding fluid flow. Fluid–structure interactions can be stable or oscillatory. In oscillatory interactions, the strain induced in the solid structure causes it to move such that the source of strain is reduced, and the structure returns to its former state only for the process to repeat.
In computational fluid dynamics, the immersed boundary method originally referred to an approach developed by Charles Peskin in 1972 to simulate fluid–structure (fiber) interactions. Treating the coupling of the structure deformations and the fluid flow poses a number of challenging problems for numerical simulations. In the immersed boundary method the fluid is represented in an Eulerian coordinate system and the structure is represented in Lagrangian coordinates. For Newtonian fluids governed by the Navier–Stokes equations, the fluid equations are

\[
\rho \left( \frac{\partial u}{\partial t} + u \cdot \nabla u \right) = -\nabla p + \mu \, \Delta u + f, \qquad \nabla \cdot u = 0,
\]

where u is the fluid velocity, p the pressure, ρ the density, μ the dynamic viscosity, and f the body-force density through which the immersed structure acts on the fluid.
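The full method alternates a fluid solve with Eulerian/Lagrangian transfer steps; the sketch below, assuming Python with NumPy, shows only the transfer step in which Lagrangian point forces are spread onto the Eulerian grid through Peskin's four-point regularized delta function (grid size, marker position, and force value are arbitrary illustrative choices).

```python
# Spreading Lagrangian forces onto an Eulerian grid; the fluid solve itself
# is omitted, this only illustrates the coupling step.
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta kernel (argument in grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3 - 2 * r + np.sqrt(1 + 4 * r - 4 * r * r)) / 8.0
    if r < 2.0:
        return (5 - 2 * r - np.sqrt(-7 + 12 * r - 4 * r * r)) / 8.0
    return 0.0

def spread_force(markers, forces, nx, ny, h):
    """Spread Lagrangian point forces onto an nx-by-ny periodic grid."""
    f = np.zeros((nx, ny))
    for (Xx, Xy), F in zip(markers, forces):
        i0, j0 = int(Xx / h), int(Xy / h)
        for i in range(i0 - 2, i0 + 3):          # 4-point kernel support
            for j in range(j0 - 2, j0 + 3):
                w = peskin_delta((i * h - Xx) / h) * peskin_delta((j * h - Xy) / h)
                f[i % nx, j % ny] += F * w / (h * h)   # periodic wrap
    return f

# Example: a single unit force near the center of a 64x64 periodic grid
grid_f = spread_force([(0.5, 0.5)], [1.0], 64, 64, 1.0 / 64)
print(grid_f.sum() * (1.0 / 64) ** 2)   # total force is conserved (≈ 1.0)
```

The same kernel is used in reverse to interpolate the fluid velocity back to the markers, which keeps the coupling discretely conservative.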
The finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems.
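A minimal sketch of the method, assuming Python with NumPy: piecewise-linear (hat-function) elements for the one-dimensional Poisson problem −u″ = f on (0, 1) with homogeneous Dirichlet boundary conditions, with the load integrated by a simple midpoint rule.

```python
# 1D linear finite elements for -u'' = f, u(0) = u(1) = 0.
import numpy as np

n_el = 50                              # number of elements
nodes = np.linspace(0.0, 1.0, n_el + 1)
K = np.zeros((n_el + 1, n_el + 1))     # global stiffness matrix
b = np.zeros(n_el + 1)                 # global load vector
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # exact solution sin(pi x)

for e in range(n_el):                  # element-by-element assembly
    h = nodes[e + 1] - nodes[e]
    K[e:e + 2, e:e + 2] += np.array([[1, -1], [-1, 1]]) / h  # element stiffness
    xm = 0.5 * (nodes[e] + nodes[e + 1])
    b[e:e + 2] += f(xm) * h / 2.0      # midpoint-rule load contribution

# Homogeneous Dirichlet conditions: solve on interior nodes only
u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], b[1:-1])
print("max error:", np.abs(u - np.sin(np.pi * nodes)).max())
```

The same assembly pattern (local element matrices scattered into a global system) carries over to two and three dimensions and to the other problem areas listed above.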
The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
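The core of the construction can be summarized, as a sketch in standard HAM notation, by the zeroth-order deformation equation, where 𝓛 is an auxiliary linear operator chosen by the user, 𝓝 is the nonlinear operator of the original equation, u₀ is an initial guess, and ħ is the convergence-control parameter:

```latex
% Zeroth-order deformation equation at the heart of HAM (a sketch).
(1 - q)\,\mathcal{L}\!\left[\varphi(t;q) - u_0(t)\right]
    = q\,\hbar\,\mathcal{N}\!\left[\varphi(t;q)\right],
    \qquad q \in [0,1]
```

At q = 0 the homotopy reduces to φ(t;0) = u₀(t), and at q = 1 it recovers the original nonlinear problem, φ(t;1) = u(t); expanding φ as a homotopy-Maclaurin series in q and evaluating it at q = 1 yields the convergent series solution, with ħ tuned to control the region of convergence.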
Denis Louis Blackmore was an American mathematician and a full professor of the Department of Mathematical Sciences at New Jersey Institute of Technology. He was also one of the founding members of the Center for Applied Mathematics and Statistics at NJIT. Dr. Blackmore was mainly known for his many contributions in the fields of dynamical systems and differential topology. In addition to this, he had many contributions in other fields of applied mathematics, physics, biology, and engineering.
In applied mathematics, the finite pointset method (FPM) is a general approach for the numerical solution of problems in continuum mechanics, such as the simulation of fluid flows. In this approach the medium is represented by a finite set of points, each endowed with the relevant local properties of the medium such as density, velocity, pressure, and temperature.
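A minimal sketch of the data layout this implies, assuming Python with NumPy (the field names and the brute-force neighbor search are illustrative choices, not FPM's actual interface): each point carries local medium properties, and discrete operators are built from a point's neighbors within a smoothing radius h.

```python
# A point cloud carrying local medium properties, as in meshfree methods.
import numpy as np
from dataclasses import dataclass

@dataclass
class PointCloud:
    x: np.ndarray        # (N, 2) point positions
    rho: np.ndarray      # (N,) density at each point
    v: np.ndarray        # (N, 2) velocity at each point
    p: np.ndarray        # (N,) pressure at each point
    T: np.ndarray        # (N,) temperature at each point

    def neighbors(self, i: int, h: float) -> np.ndarray:
        """Indices of points within smoothing radius h of point i
        (brute force; real implementations use spatial hashing)."""
        d = np.linalg.norm(self.x - self.x[i], axis=1)
        return np.where((d < h) & (d > 0.0))[0]

rng = np.random.default_rng(1)
N = 1000
cloud = PointCloud(
    x=rng.random((N, 2)), rho=np.ones(N), v=np.zeros((N, 2)),
    p=np.zeros(N), T=np.full(N, 293.15),
)
print(len(cloud.neighbors(0, h=0.05)), "neighbors of point 0")
```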
MOOSE is an object-oriented C++ finite element framework for the development of tightly coupled multiphysics solvers from Idaho National Laboratory. MOOSE makes use of the PETSc non-linear solver package and libMesh to provide the finite element discretization.
Alexander Nikolaevich Gorban is a scientist of Russian origin, working in the United Kingdom. He is a professor at the University of Leicester, and director of its Mathematical Modeling Centre. Gorban has contributed to many areas of fundamental and applied science, including statistical physics, non-equilibrium thermodynamics, machine learning and mathematical biology.
Parareal is a parallel algorithm from numerical analysis, used for the solution of initial value problems. It was introduced in 2001 by Lions, Maday and Turinici. Since then, it has become one of the most widely studied parallel-in-time integration methods.
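A minimal sketch of the iteration, assuming Python with NumPy and a scalar linear test problem dy/dt = λy: a cheap coarse propagator G is corrected by an accurate fine propagator F, whose evaluations across time windows are independent and hence parallelizable.

```python
# Parareal for dy/dt = lam * y on [0, T], split into N time windows.
import numpy as np

lam, T, N = -1.0, 5.0, 10
dT = T / N

def G(y, dt):                       # coarse: a single explicit Euler step
    return y + dt * lam * y

def F(y, dt, m=100):                # fine: m Euler sub-steps
    for _ in range(m):
        y = y + (dt / m) * lam * y
    return y

# Initial coarse sweep over all windows
U = np.zeros(N + 1); U[0] = 1.0
for n in range(N):
    U[n + 1] = G(U[n], dT)

# Parareal corrections: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
for k in range(5):
    Fk = np.array([F(U[n], dT) for n in range(N)])   # parallel in practice
    Gk = np.array([G(U[n], dT) for n in range(N)])
    for n in range(N):                               # cheap sequential sweep
        U[n + 1] = G(U[n], dT) + Fk[n] - Gk[n]

print("error vs exact:", abs(U[-1] - np.exp(lam * T)))
```

The expensive F evaluations in each iteration depend only on the previous iterate, which is what allows them to be distributed across processors, one per time window.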
The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation.
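A minimal sketch of the idea, assuming Python with NumPy and a finite-difference discretization of the Poisson problem −Δu = f on the unit square (the separable load and the iteration counts are illustrative choices): the solution is built as a sum of separated products X_k(x)Y_k(y), each obtained by an alternating-directions fixed point that only ever solves one-dimensional systems.

```python
# PGD-style separated representation for the finite-difference Poisson
# problem A U + U A^T = F with zero Dirichlet boundary conditions.
import numpy as np

m = 64                                   # interior points per direction
h = 1.0 / (m + 1)
x = np.linspace(h, 1 - h, m)
A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2        # 1D -d2/dx2 operator

F = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # separable test load
U = np.zeros((m, m))
I = np.eye(m)

for k in range(5):                       # enrichment loop: add one term
    R = F - A @ U - U @ A.T              # residual of the current sum
    X, Y = np.ones(m), np.ones(m)
    for _ in range(20):                  # alternating-directions fixed point
        X = np.linalg.solve((Y @ Y) * A + (Y @ A @ Y) * I, R @ Y)
        Y = np.linalg.solve((X @ X) * A + (X @ A @ X) * I, R.T @ X)
    U += np.outer(X, Y)                  # enrich the separated representation

print("residual norm:", np.linalg.norm(F - A @ U - U @ A.T))
```

Because each enrichment step solves only m-dimensional systems rather than an m²-dimensional one, the approach scales to problems where a full tensor-product discretization would be intractable.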
Eleni Chatzi is a Greek civil engineer, researcher, and a professor and Chair of Structural Mechanics and Monitoring at the Department of Civil, Environmental and Geomatic Engineering of the Swiss Federal Institute of Technology in Zurich.
Physics-informed neural networks (PINNs), also referred to as theory-trained neural networks (TTNs), are universal function approximators that can embed knowledge of the physical laws, described by partial differential equations (PDEs), that govern a given data set into the learning process. They address the low data availability of some biological and engineering systems, which leaves most state-of-the-art machine learning techniques lacking robustness in these scenarios. The prior knowledge of general physical laws acts as a regularization agent in the training of neural networks (NNs), limiting the space of admissible solutions and increasing the correctness of the function approximation. Embedding this prior information into a neural network thus enriches the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with few training examples.
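A minimal sketch of the training setup, assuming Python with PyTorch and the toy initial-value problem u′ = −u, u(0) = 1 (network size, learning rate, and collocation sampling are arbitrary choices): the equation residual itself serves as the training loss, so no solution data is required.

```python
# A small PINN: the network is trained so that the ODE residual u' + u = 0
# and the initial condition u(0) = 1 are both satisfied.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    # Random collocation points on [0, 5]; no solution data anywhere.
    t = (5.0 * torch.rand(128, 1)).requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss_pde = ((du + u) ** 2).mean()                       # residual of u' = -u
    loss_ic = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    opt.zero_grad()
    (loss_pde + loss_ic).backward()
    opt.step()

# Compare against the exact solution u(t) = exp(-t) at t = 1
print(net(torch.tensor([[1.0]])).item(), "vs", torch.exp(torch.tensor(-1.0)).item())
```

The PDE residual term plays exactly the regularizing role described above: it constrains the network to the admissible solution manifold instead of fitting observed data points.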