In mathematics, in particular numerical analysis, the FETI method (finite element tearing and interconnect) is an iterative substructuring method for solving systems of linear equations from the finite element method for the solution of elliptic partial differential equations, in particular in computational mechanics.[1] In each iteration, FETI requires the solution of a Neumann problem in each substructure and the solution of a coarse problem. The simplest version of FETI with no preconditioner (or only a diagonal preconditioner) in the substructure is scalable with the number of substructures,[2] but its condition number grows polynomially with the number of elements per substructure. FETI with a (more expensive) preconditioner consisting of the solution of a Dirichlet problem in each substructure is also scalable with the number of substructures, and its condition number grows only polylogarithmically with the number of elements per substructure.[3] The coarse space in FETI consists of the nullspace on each substructure.
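In algebraic terms, FETI "tears" the global problem into independent substructure problems and "interconnects" them with Lagrange multipliers on the interface. A schematic statement of the resulting saddle-point system, using illustrative notation not taken from the cited references, is

```latex
% Hedged sketch of the FETI saddle-point structure; K_i, f_i, B_i, \lambda are
% generic illustrative symbols (local stiffness, local load, signed Boolean
% interface matrices, interface Lagrange multipliers).
\begin{aligned}
K_i u_i &= f_i + B_i^{\mathsf T} \lambda, \qquad i = 1,\dots,N, \\
\sum_{i=1}^{N} B_i u_i &= 0 .
\end{aligned}
```

Eliminating the local unknowns u_i (which requires generalized inverses, since the local Neumann problems may be singular) leaves a dual problem in the multipliers λ; it is solved iteratively with a projection onto the local nullspaces, and that projection plays the role of the coarse problem mentioned above.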
Apart from FETI Dual-Primal (FETI-DP, see below), several extensions have been developed to solve particular physical problems, such as FETI Helmholtz (FETI-H),[4][5] FETI for quasi-incompressible problems,[6] and FETI Contact (FETI-C).[7][8][9]
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).
In numerical analysis, a multigrid method is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners.
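To make the idea concrete, the sketch below implements a two-grid cycle, the simplest multigrid method, for a 1D Poisson problem: a few weighted-Jacobi sweeps damp the short-wavelength error on the fine grid, and the remaining long-wavelength error is removed by an exact solve on a coarser grid. It is a minimal illustration under these assumptions, not a production multigrid solver.

```python
# Minimal two-grid sketch for -u'' = f on (0, 1): smoothing + coarse-grid correction.
import numpy as np

def poisson_matrix(n):
    """1D Poisson matrix on n interior points, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi relaxation: damps short-wavelength error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def two_grid(A_h, f, u, restrict, prolong):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(A_h, u, f)                  # pre-smoothing on the fine grid
    r_H = restrict @ (f - A_h @ u)         # restrict the residual
    A_H = restrict @ A_h @ prolong         # Galerkin coarse-grid operator
    e_H = np.linalg.solve(A_H, r_H)        # solve the coarse problem exactly
    u = u + prolong @ e_H                  # prolong and correct
    return jacobi(A_h, u, f)               # post-smoothing

n = 63                                     # fine grid; coarse grid has 31 points
A = poisson_matrix(n)
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)           # exact solution sin(pi x)

# linear-interpolation prolongation and full-weighting restriction
P = np.zeros((n, (n - 1) // 2))
for j in range((n - 1) // 2):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T

u = np.zeros(n)
for k in range(10):
    u = two_grid(A, f, u, R, P)
    print(k, np.linalg.norm(f - A @ u))    # residual drops by a constant factor per cycle
```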
In mathematics a radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point, either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert\mathbf{x}\rVert)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert\mathbf{x}-\mathbf{c}\rVert)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\lVert\mathbf{x}\rVert)$ is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. They are often used as a collection which forms a basis for some function space of interest, hence the name.
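A brief sketch of one typical use of radial basis functions, scattered-data interpolation with a Gaussian radial function; the kernel, shape parameter, and data are illustrative assumptions rather than a canonical choice.

```python
# RBF interpolation sketch: s(x) = sum_i w_i * phi(|x - x_i|) with a Gaussian phi.
import numpy as np

def gaussian_rbf(r, eps=3.0):
    """phi(r) = exp(-(eps*r)^2): the value depends only on the distance r."""
    return np.exp(-(eps * r) ** 2)

# scattered centers x_i and data values f(x_i) to interpolate
centers = np.sort(np.random.rand(20))
values = np.sin(2 * np.pi * centers)

# weights solve A w = values, where A_ij = phi(|x_i - x_j|) is symmetric
A = gaussian_rbf(np.abs(centers[:, None] - centers[None, :]))
w = np.linalg.solve(A, values)

def interpolate(x):
    """Evaluate the RBF interpolant at the points x."""
    return gaussian_rbf(np.abs(x[:, None] - centers[None, :])) @ w

x = np.linspace(0, 1, 200)
print(np.max(np.abs(interpolate(x) - np.sin(2 * np.pi * x))))  # interpolation error
```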
Fluid–structure interaction (FSI) is the interaction of some movable or deformable structure with an internal or surrounding fluid flow. Fluid–structure interactions can be stable or oscillatory. In oscillatory interactions, the strain induced in the solid structure causes it to move such that the source of strain is reduced, and the structure returns to its former state only for the process to repeat.
Johann Hadji Argyris FRS was a Greek pioneer of computer applications in science and engineering, among the creators of the finite element method (FEM), and later Professor at the University of Stuttgart and Director of the Institute of Structural Mechanics and Dynamics in Aerospace Engineering.
In numerical analysis, BDDC (balancing domain decomposition by constraints) is a domain decomposition method for solving large symmetric, positive definite systems of linear equations that arise from the finite element method. BDDC is used as a preconditioner to the conjugate gradient method. A specific version of BDDC is characterized by the choice of coarse degrees of freedom, which can be values at the corners of the subdomains, or averages over the edges or the faces of the interface between the subdomains. One application of the BDDC preconditioner then combines the solution of local problems on each subdomain with the solution of a global coarse problem with the coarse degrees of freedom as the unknowns. The local problems on different subdomains are completely independent of each other, so the method is suitable for parallel computing. With a proper choice of the coarse degrees of freedom (corners in 2D, corners plus edges or corners plus faces in 3D) and with regular subdomain shapes, the condition number of the method is bounded when increasing the number of subdomains, and it grows only very slowly with the number of elements per subdomain. Thus the number of iterations is bounded in the same way, and the method scales well with the problem size and the number of subdomains.
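The slow growth with the number of elements per subdomain is usually stated as a polylogarithmic condition number bound. Writing H for the subdomain size and h for the element size (so that H/h measures the number of elements per subdomain), the bound for the preconditioned operator has the form shown below, with C a constant independent of H, h, and the number of subdomains.

```latex
\kappa \;\le\; C \left( 1 + \log \frac{H}{h} \right)^{2}
```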
In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov space iterative methods, such as the conjugate gradient method, GMRES, and LOBPCG.
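As a deliberately simplified illustration of a domain decomposition preconditioner inside a Krylov method, the sketch below applies a one-level additive Schwarz preconditioner, built from overlapping subdomain solves, within SciPy's conjugate gradient solver. The test problem, subdomain sizes, and overlap are illustrative assumptions; note that without a coarse problem the iteration count still grows with the number of subdomains, which is exactly what the coarse problem described above is meant to prevent.

```python
# One-level additive Schwarz preconditioner for CG on a 1D Poisson-type system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, splu

n = 400
A = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1], format="csr")
b = np.ones(n)

# overlapping index sets for 8 subdomains, with 5 points of overlap on each side
num_sub, overlap = 8, 5
size = n // num_sub
subdomains = [np.arange(max(0, k * size - overlap), min(n, (k + 1) * size + overlap))
              for k in range(num_sub)]
local_solvers = [splu(A[idx, :][:, idx].tocsc()) for idx in subdomains]

def apply_preconditioner(r):
    """Additive Schwarz: sum of overlapping local solves applied to the residual."""
    z = np.zeros_like(r)
    for idx, solver in zip(subdomains, local_solvers):
        z[idx] += solver.solve(r[idx])
    return z

M = LinearOperator((n, n), matvec=apply_preconditioner)
iterations = []
x, info = cg(A, b, M=M, callback=lambda xk: iterations.append(1))
print(info, len(iterations), np.linalg.norm(A @ x - b))
```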
The FETI-DP method is a domain decomposition method that enforces equality of the solution at subdomain interfaces by Lagrange multipliers except at subdomain corners, which remain primal variables. The first mathematical analysis of the method was provided by Mandel and Tezaur. The method was further improved by enforcing the equality of averages across the edges or faces on subdomain interfaces, which is important for the parallel scalability of 3D problems. FETI-DP is a simplification and a better-performing version of FETI. The eigenvalues of FETI-DP are the same as those of BDDC, except for the eigenvalue equal to one, and so the performance of FETI-DP and BDDC is essentially the same.
In numerical analysis, the balancing domain decomposition method (BDD) is an iterative method to find the solution of a symmetric positive definite system of linear algebraic equations arising from the finite element method. In each iteration, it combines the solution of local problems on non-overlapping subdomains with a coarse problem created from the subdomain nullspaces. BDD requires only the solution of subdomain problems rather than access to the matrices of those problems, so it is applicable to situations where only the solution operators are available, such as in oil reservoir simulation by mixed finite elements. In its original formulation, BDD performs well only for 2nd order problems, such as elasticity in 2D and 3D. For 4th order problems, such as plate bending, it needs to be modified by adding to the coarse problem special basis functions that enforce continuity of the solution at subdomain corners, which, however, makes it more expensive. The BDDC method uses the same corner basis functions as this modified version of BDD, but in an additive rather than multiplicative fashion. The dual counterpart to BDD is FETI, which enforces the equality of the solution between the subdomains by Lagrange multipliers. The base versions of BDD and FETI are not mathematically equivalent, though a special version of FETI designed to be robust for hard problems has the same eigenvalues and thus essentially the same performance as BDD.
In mathematics, Neumann–Neumann methods are domain decomposition preconditioners, named so because they solve a Neumann problem on each subdomain on both sides of the interface between the subdomains. Like all domain decomposition methods, Neumann–Neumann methods require the solution of a coarse problem to provide global communication, so that the number of iterations does not grow with the number of subdomains. The balancing domain decomposition method is a Neumann–Neumann method with a special kind of coarse problem.
In numerical analysis, mortar methods are discretization methods for partial differential equations, which use separate finite element discretizations on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice with the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
Charbel Farhat is the Vivian Church Hoff Professor of Aircraft Structures in the School of Engineering at Stanford University, where, from 2008 to 2023, he chaired the Department of Aeronautics and Astronautics. From 2022 to 2023, he chaired this department as the inaugural James and Anna Marie Spilker Chair of Aeronautics and Astronautics. He is also Professor in the Institute for Computational and Mathematical Engineering, and Director of the Stanford-King Abdulaziz City for Science and Technology Center of Excellence for Aeronautics and Astronautics. From 2017 to 2023, he served on the Space Technology Industry-Government-University Roundtable; from 2015 to 2019, he served on the United States Air Force Scientific Advisory Board (SAB); from 2008 to 2018, he served on the United States Bureau of Industry and Security's Emerging Technology and Research Advisory Committee (ETRAC) at the United States Department of Commerce; and from 2007 to 2018, he served as the Director of the Army High Performance Computing Research Center at Stanford University. He was designated by US Navy recruiters as a Primary Key-Influencer and flew with the Blue Angels during Fleet Week 2014.
In numerical analysis, a coarse problem is an auxiliary system of equations used in an iterative method for the solution of a given larger system of equations. A coarse problem is basically a version of the same problem at a lower resolution, retaining its essential characteristics, but with fewer variables. The purpose of the coarse problem is to propagate information throughout the whole problem globally.
Model order reduction (MOR) is a technique for reducing the computational complexity of mathematical models in numerical simulations. As such it is closely related to the concept of metamodeling, with applications in all areas of mathematical modelling.
The finite point method (FPM) is a meshfree method for solving partial differential equations (PDEs) on scattered distributions of points. The FPM was proposed in the mid-nineties with the purpose of facilitating the solution of problems involving complex geometries, free surfaces, moving boundaries and adaptive refinement. Since then, the FPM has evolved considerably, showing satisfactory accuracy and capabilities to deal with different fluid and solid mechanics problems.
In numerical analysis, multi-time-step integration, also referred to as multiple-step or asynchronous time integration, is a numerical time-integration method that uses different time-steps or time-integrators for different parts of the problem. There are different approaches to multi-time-step integration. They are based on domain decomposition and can be classified into strong (monolithic) or weak (staggered) schemes. Using different time-steps or time-integrators in the context of a weak algorithm is rather straightforward, because the numerical solvers operate independently. However, this is not the case in a strong algorithm. In the past few years a number of research articles have addressed the development of strong multi-time-step algorithms. In either case, strong or weak, the numerical accuracy and stability need to be carefully studied. Other approaches to multi-time-step integration in the context of operator splitting methods have also been developed, e.g., the multi-rate GARK method and multi-step methods for molecular dynamics simulations.
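A hedged sketch of the weak (staggered) flavor described above: two coupled oscillators are advanced by separate explicit integrators, the fast (stiff) subsystem subcycling with a smaller time-step while the slow subsystem is advanced once per macro step. The model problem, the symplectic-Euler updates, and all parameter values are illustrative assumptions.

```python
# Weak (staggered) multi-time-step integration of two coupled oscillators:
#   m1 x1'' = -k1 x1 - kc (x1 - x2)   (slow subsystem, macro step dt)
#   m2 x2'' = -k2 x2 - kc (x2 - x1)   (fast subsystem, substeps dt/substeps)
m1, k1 = 1.0, 1.0          # slow subsystem
m2, k2 = 0.01, 100.0       # fast (stiff) subsystem
kc = 0.5                   # coupling spring
dt, substeps, T = 0.01, 20, 10.0

x1, v1 = 1.0, 0.0
x2, v2 = 0.0, 0.0

t = 0.0
while t < T:
    # fast subsystem: subcycle with dt/substeps, treating x1 as frozen (weak coupling)
    h = dt / substeps
    for _ in range(substeps):
        v2 += h * (-k2 * x2 - kc * (x2 - x1)) / m2   # symplectic Euler, fast part
        x2 += h * v2
    # slow subsystem: one macro step, using the freshly updated x2
    v1 += dt * (-k1 * x1 - kc * (x1 - x2)) / m1
    x1 += dt * v1
    t += dt

print(x1, x2)   # final states; the coupling/splitting error depends on dt and substeps
```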
The proper generalized decomposition (PGD) is an iterative numerical method for solving boundary value problems (BVPs), that is, partial differential equations constrained by a set of boundary conditions, such as Poisson's equation or Laplace's equation.
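The central object in PGD is a separated representation of the unknown field: the solution of a problem posed over several coordinates (space, time, parameters) is sought as a sum of products of single-coordinate functions, with one such product (a mode) added per enrichment iteration. Schematically, using generic symbols,

```latex
% Separated (rank-N) representation built up mode by mode; the symbols are generic.
u(x_1, x_2, \dots, x_d) \;\approx\; \sum_{i=1}^{N} F_i^{1}(x_1)\, F_i^{2}(x_2) \cdots F_i^{d}(x_d)
```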
Michel Bercovier is a French-Israeli Professor (Emeritus) of Scientific Computing and Computer Aided Design (CAD) in The Rachel and Selim Benin School of Computer Science and Engineering at the Hebrew University of Jerusalem. Bercovier is also the head of the School of Computer Science at the Hadassah Academic College, Jerusalem.