Abstract additive Schwarz method


In mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method for boundary value problems on partial differential equations, formulated only in terms of linear algebra, without reference to domains, subdomains, etc. Many, if not all, domain decomposition methods can be cast as an abstract additive Schwarz method, which is often the first and most convenient approach to their analysis.[1]
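A common way to write the method down is as a preconditioner assembled from solves on subspaces. The following is a minimal sketch in generic notation (the restriction matrices R_i, the subproblems A_i, and the count N are illustrative placeholders rather than notation taken from the cited reference):

```latex
% One-level abstract additive Schwarz preconditioner for a linear system Ax = b.
% R_i restricts a global vector to the i-th subspace and A_i = R_i A R_i^T is
% the corresponding subproblem (notation is illustrative, not from [1]).
\[
  M_{\mathrm{AS}}^{-1} \;=\; \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i ,
  \qquad A_i = R_i A R_i^{T} .
\]
% One stationary step applies every subproblem solve to the same residual and
% adds the corrections, which is what makes the method "additive"; in practice
% M_AS^{-1} is more often used as a preconditioner for a Krylov method.
\[
  x^{k+1} \;=\; x^{k} + M_{\mathrm{AS}}^{-1}\bigl(b - A x^{k}\bigr) .
\]
```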

Mathematics: field of study concerning quantity, patterns and change

Mathematics includes the study of such topics as quantity, structure, space, and change.

Hermann Schwarz: German mathematician

Karl Hermann Amandus Schwarz was a German mathematician, known for his work in complex analysis.

In mathematics, the additive Schwarz method, named after Hermann Schwarz, solves a boundary value problem for a partial differential equation approximately by splitting it into boundary value problems on smaller domains and adding the results.
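A minimal numerical sketch of this idea, assuming a 1-D Poisson model problem and two overlapping index sets standing in for the subdomains (all sizes and names are illustrative):

```python
import numpy as np

# Model problem: 1-D Poisson equation -u'' = f on a uniform grid with
# homogeneous Dirichlet boundary conditions (purely illustrative).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.full(n, 1.0 / (n + 1) ** 2)

# Two overlapping "subdomains", represented simply as overlapping index sets.
subdomains = [np.arange(0, 30), np.arange(20, 50)]

x = np.zeros(n)
for _ in range(200):                    # fixed number of additive Schwarz sweeps
    r = b - A @ x                       # global residual
    z = np.zeros(n)
    for idx in subdomains:
        A_i = A[np.ix_(idx, idx)]       # local problem, i.e. R_i A R_i^T
        z[idx] += np.linalg.solve(A_i, r[idx])  # solve on the subdomain, add result
    x += 0.5 * z                        # damped additive update: the undamped sum
                                        # can oscillate as a stationary iteration

print("residual norm:", np.linalg.norm(b - A @ x))
```

In practice the summed corrections are usually applied as a preconditioner inside a Krylov method rather than as the damped stationary iteration shown here.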

Related Research Articles

Numerical analysis: study of algorithms that use numerical approximation for the problems of mathematical analysis

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It naturally finds application in all fields of engineering and the physical sciences, but in the 21st century the life sciences, social sciences, medicine, business, and even the arts have also adopted elements of scientific computation. As an area of mathematics and computer science, numerical analysis generates, analyzes, and implements algorithms; the growth in computing power has made increasingly realistic mathematical models practical in science and engineering, and sophisticated numerical analysis is required to solve these more involved models of the world. Ordinary differential equations appear in celestial mechanics; numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Partial differential equation: differential equation that contains unknown multivariable functions and their partial derivatives

In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables, and are either solved by hand or used to create a computer model. A special case is ordinary differential equations (ODEs), which deal with functions of a single variable and their derivatives.

Superposition principle: fundamental physics principle stating that physical solutions of linear systems are linear

The superposition principle, also known as the superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. So that if input A produces response X and input B produces response Y, then input (A + B) produces response (X + Y).
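Written for a generic linear map L, the principle is the single identity:

```latex
% Superposition: the response to a weighted combination of inputs equals the
% same weighted combination of the individual responses.
\[
  L(\alpha\,x_A + \beta\,x_B) \;=\; \alpha\,L(x_A) + \beta\,L(x_B) .
\]
```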

Numerical partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).

In numerical analysis, Stone's method, also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations. The method uses an incomplete LU decomposition, which approximates the exact LU decomposition, to get an iterative solution of the problem. The method is named after Herbert L. Stone, who proposed it in 1968.
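Stone's particular factorization exploits the five-point stencil and is not reproduced here; the sketch below only illustrates the general pattern of iterating with an incomplete LU factorization as an approximate solve, using SciPy's general-purpose ILU (not SIP itself) on a 2-D Poisson matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Poisson matrix on an m-by-m grid (five-point stencil), a typical target
# for strongly implicit / ILU-type methods.
m = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = sp.kronsum(T, T, format="csc")
b = np.ones(A.shape[0])

# General-purpose incomplete LU from SciPy; this is NOT Stone's factorization,
# only an illustration of "approximate LU, then iterate on the residual".
ilu = spla.spilu(A, drop_tol=1e-3)

x = np.zeros_like(b)
for k in range(100):
    r = b - A @ x                      # current residual
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
    x += ilu.solve(r)                  # correct using the incomplete factors

print("iterations:", k, "final residual:", np.linalg.norm(b - A @ x))
```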

In mathematics, in particular numerical analysis, the FETI method is an iterative substructuring method for solving systems of linear equations from the finite element method for the solution of elliptic partial differential equations, in particular in computational mechanics. In each iteration, FETI requires the solution of a Neumann problem in each substructure and the solution of a coarse problem. The simplest version of FETI with no preconditioner in the substructure is scalable with the number of substructures, but its condition number grows polynomially with the number of elements per substructure. FETI with a preconditioner consisting of the solution of a Dirichlet problem in each substructure is scalable with the number of substructures, and its condition number grows only polylogarithmically with the number of elements per substructure. The coarse space in FETI consists of the nullspace on each substructure.

In numerical analysis, BDDC is a domain decomposition method for solving large symmetric, positive definite systems of linear equations that arise from the finite element method. BDDC is used as a preconditioner to the conjugate gradient method. A specific version of BDDC is characterized by the choice of coarse degrees of freedom, which can be values at the corners of the subdomains, or averages over the edges or the faces of the interface between the subdomains. One application of the BDDC preconditioner then combines the solution of local problems on each subdomain with the solution of a global coarse problem with the coarse degrees of freedom as the unknowns. The local problems on different subdomains are completely independent of each other, so the method is suitable for parallel computing. With a proper choice of the coarse degrees of freedom and with regular subdomain shapes, the condition number of the method is bounded when increasing the number of subdomains, and it grows only very slowly with the number of elements per subdomain. Thus the number of iterations is bounded in the same way, and the method scales well with the problem size and the number of subdomains.

Domain decomposition methods: type of numerical method

In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov subspace iterative methods, such as the conjugate gradient method or GMRES.
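As an illustration of this preconditioning role, the sketch below builds a one-level additive Schwarz preconditioner from overlapping index blocks of a 1-D Poisson matrix and passes it to SciPy's conjugate gradient solver (problem sizes and overlap are arbitrary choices):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1-D Poisson model problem; overlapping index blocks stand in for subdomains.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

blocks = [np.arange(i, min(i + 60, n)) for i in range(0, n, 40)]   # overlap of 20
local_solves = [spla.factorized(A[idx, :][:, idx].tocsc()) for idx in blocks]

def additive_schwarz(r):
    """One-level additive Schwarz: sum of local solves applied to a residual."""
    z = np.zeros_like(r)
    for idx, solve in zip(blocks, local_solves):
        z[idx] += solve(r[idx])
    return z

M = spla.LinearOperator((n, n), matvec=additive_schwarz)
x, info = spla.cg(A, b, M=M)
print("cg info:", info, "residual:", np.linalg.norm(b - A @ x))
```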

In mathematics, the Schwarz alternating method or alternating process is an iterative method introduced in 1869-1870 by Hermann Schwarz in the theory of conformal mapping. Given two overlapping regions in the complex plane in each of which the Dirichlet problem could be solved, Schwarz described an iterative method for solving the Dirichlet problem in their union, provided their intersection was suitably well behaved. This was one of several constructive techniques of conformal mapping developed by Schwarz as a contribution to the problem of uniformization, posed by Riemann in the 1850s and first resolved rigorously by Koebe and Poincaré in 1907. It furnished a scheme for uniformizing the union of two regions knowing how to uniformize each of them separately, provided their intersection was topologically a disk or an annulus. From 1870 onwards Carl Neumann also contributed to this theory.

In mathematics, Neumann–Neumann methods are domain decomposition preconditioners named so because they solve a Neumann problem on each subdomain, on both sides of the interface between the subdomains. As with all domain decomposition methods, Neumann–Neumann methods require the solution of a coarse problem to provide global communication, so that the number of iterations does not grow with the number of subdomains. The balancing domain decomposition is a Neumann–Neumann method with a special kind of coarse problem.

In mathematics, the Neumann–Dirichlet method is a domain decomposition preconditioner which involves solving a Neumann boundary value problem on one subdomain and a Dirichlet boundary value problem on another, the two subdomains being adjacent across the interface between them. On a problem with many subdomains organized in a rectangular mesh, the subdomains are assigned Neumann or Dirichlet problems in a checkerboard fashion.

In numerical analysis, mortar methods are discretization methods for partial differential equations, which use separate finite element discretizations on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
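The coupling by Lagrange multipliers leads to a saddle-point system of the schematic form below (a generic sketch; the precise blocks depend on the chosen discretization):

```latex
% Schematic saddle-point system for a mortar-type coupling: A collects the
% decoupled subdomain stiffness matrices, B encodes the interface matching
% conditions, u are the subdomain unknowns and \lambda the Lagrange multipliers.
\[
  \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
  \begin{pmatrix} u \\ \lambda \end{pmatrix}
  =
  \begin{pmatrix} f \\ 0 \end{pmatrix} .
\]
```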

Decomposition method is a generic term for solutions of various problems and design of algorithms in which the basic idea is to decompose the problem into subproblems. The term may also refer to any of several specific methods.

In numerical analysis, a coarse problem is an auxiliary system of equations used in an iterative method for the solution of a given larger system of equations. A coarse problem is basically a version of the same problem at a lower resolution, retaining its essential characteristics, but with fewer variables. The purpose of the coarse problem is to propagate information throughout the whole problem globally.
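In the additive Schwarz setting, for example, the coarse problem typically appears as one extra term in the preconditioner (notation as in the sketch near the top of this article; the coarse restriction R_0 is a generic placeholder):

```latex
% Two-level additive Schwarz: the coarse term R_0^T A_0^{-1} R_0, with the
% coarse problem A_0 = R_0 A R_0^T, propagates information globally.
\[
  M^{-1} \;=\; R_0^{T} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i .
\]
```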

In mathematics, a Poincaré–Steklov operator maps the values of one boundary condition of the solution of an elliptic partial differential equation in a domain to the values of another boundary condition. Usually, either of the boundary conditions determines the solution. Thus, a Poincaré–Steklov operator encapsulates the boundary response of the system modelled by the partial differential equation. When the partial differential equation is discretized, for example by finite elements or finite differences, the discretization of the Poincaré–Steklov operator is the Schur complement obtained by eliminating all degrees of freedom inside the domain.
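With the degrees of freedom split into interior (I) and interface (Γ) blocks, the discrete operator mentioned above is the usual Schur complement, written here in generic block notation:

```latex
% Block form of the discretized problem; eliminating the interior unknowns x_I
% leaves the Schur complement S acting on the interface unknowns x_Gamma.
\[
  \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
  \begin{pmatrix} x_I \\ x_\Gamma \end{pmatrix}
  =
  \begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix} ,
  \qquad
  S \;=\; A_{\Gamma\Gamma} - A_{\Gamma I}\,A_{II}^{-1}\,A_{I\Gamma} .
\]
```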

Olof B. Widlund, born 1938, is a Swedish-American mathematician. He is well known for his leading role in and fundamental contributions to domain decomposition methods. He received his Ph.D. at Uppsala University in 1966 and is professor of Computer Science at the Courant Institute of New York University.

In numerical analysis, multi-time-step integration, also referred to as multiple-step or asynchronous time integration, is a numerical time-integration method that uses different time-steps or time-integrators for different parts of the problem. There are different approaches to multi-time-step integration. They are based on domain decomposition and can be classified into strong (monolithic) or weak (staggered) schemes. Using different time-steps or time-integrators in the context of a weak algorithm is rather straightforward, because the numerical solvers operate independently. However, this is not the case in a strong algorithm. In the past few years a number of research articles have addressed the development of strong multi-time-step algorithms. In either case, strong or weak, the numerical accuracy and stability need to be studied carefully. Other approaches to multi-time-step integration, in the context of operator splitting methods, have also been developed, e.g., the multi-rate GARK method and multi-step methods for molecular dynamics simulations.

References

  1. Dryja, Maksymilian; Widlund, Olof B. (1990), "Towards a unified theory of domain decomposition algorithms for elliptic problems", in Chan, Tony; Glowinski, Roland; Périaux, Jacques; Widlund, Olof B. (eds.), Third International Symposium on Domain Decomposition Methods for Partial Differential Equations (Houston, Texas, March 20–22, 1989) (PDF), Philadelphia, PA: SIAM, pp. 3–21.