Domain decomposition methods

In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or a few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov subspace iterative methods, such as the conjugate gradient method, GMRES, and LOBPCG.
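
As an illustration of these ideas, the following minimal MATLAB sketch builds a simple overlapping preconditioner with a coarse problem of one unknown per subdomain (a two-level additive Schwarz preconditioner, discussed below) for a 1D model problem and uses it inside MATLAB's pcg. The model problem, the number of subdomains, the overlap, and the piecewise-constant coarse space are illustrative assumptions, not part of any particular published method.

% Two-level additive Schwarz preconditioner for -u'' = f on (0,1),
% u(0) = u(1) = 0, discretized by finite differences and used inside pcg.
n  = 199;  h = 1/(n+1);
e  = ones(n,1);
A  = (1/h^2) * spdiags([-e 2*e -e], -1:1, n, n);
f  = e;

p  = 8;                                   % number of subdomains (illustrative)
ov = 4;                                   % overlap, in grid points (illustrative)
edges = round(linspace(0, n, p+1));       % non-overlapping partition of 1..n
idx = cell(p,1);
for s = 1:p
    idx{s} = max(1, edges(s)+1-ov) : min(n, edges(s+1)+ov);  % overlapping subdomain s
end

R0 = sparse(p, n);                        % coarse space: one unknown per subdomain
for s = 1:p
    R0(s, edges(s)+1:edges(s+1)) = 1;     % piecewise-constant coarse basis function
end
A0 = R0*A*R0';                            % coarse problem

% Preconditioner action: inv(M) = R0'*inv(A0)*R0 + sum over s of
% Rs'*inv(A(Is,Is))*Rs. It is formed explicitly here only because the
% problem is tiny; in practice one stores factorizations of the local blocks.
Minv = R0' * (A0 \ R0);                   % coarse correction
for s = 1:p
    I = idx{s};
    Minv(I,I) = Minv(I,I) + inv(full(A(I,I)));   % independent local solves
end

u = pcg(A, f, 1e-10, 200, @(r) Minv*r);   % additive Schwarz preconditioned CG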

In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstract additive Schwarz method.
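
The classical Schwarz alternating method can be sketched on a 1D model problem as follows; the grid size, the extent of the overlap, and the number of sweeps are illustrative assumptions.

% Schwarz alternating method for -u'' = 1 on (0,1), u(0) = u(1) = 0,
% with two overlapping subdomains covering x < 0.6 and x > 0.4.
n = 99;  h = 1/(n+1);  x = (1:n)'*h;
e = ones(n,1);
A = (1/h^2) * spdiags([-e 2*e -e], -1:1, n, n);
f = e;

I1 = find(x < 0.6);          % unknowns of subdomain 1
I2 = find(x > 0.4);          % unknowns of subdomain 2 (overlap on 0.4 < x < 0.6)
u  = zeros(n,1);             % initial guess

for k = 1:20
    % Solve on subdomain 1, taking the current values of u outside it as
    % boundary data, then do the same on subdomain 2 with the updated u.
    r = f - A*u;  u(I1) = u(I1) + A(I1,I1)\r(I1);
    r = f - A*u;  u(I2) = u(I2) + A(I2,I2)\r(I2);
end

uexact = x.*(1-x)/2;         % exact solution of the model problem
err = norm(u - uexact, inf)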

In non-overlapping methods, the subdomains intersect only on their interface. In primal methods, such as balancing domain decomposition and BDDC, the continuity of the solution across the subdomain interface is enforced by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as FETI, the continuity of the solution across the subdomain interface is enforced by Lagrange multipliers. The FETI-DP method is a hybrid between a dual and a primal method.

Non-overlapping domain decomposition methods are also called iterative substructuring methods.
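
The basic idea of a dual method can be illustrated on a 1D model problem split into two non-overlapping subdomains: each subdomain keeps its own copy of the interface value, and a single Lagrange multiplier enforces equality of the two copies. The sketch below assembles and solves the resulting saddle-point system directly, which is only meant to show how the multiplier couples the subdomains; the actual FETI method instead iterates on a dual interface problem for the multipliers. The mesh size and the model problem are illustrative assumptions.

% Continuity across a non-overlapping interface enforced by a Lagrange
% multiplier, for -u'' = 1 on (0,1), u(0) = u(1) = 0, split at x = 1/2.
m = 50;  h = 1/(2*m);  e = ones(m,1);

% Linear FE stiffness and load on [0,1/2]: Dirichlet at x = 0, free at x = 1/2.
K1 = (1/h)*spdiags([-e 2*e -e], -1:1, m, m);  K1(m,m) = 1/h;
f1 = h*e;  f1(m) = h/2;

% Same on [1/2,1]: free at x = 1/2 (first unknown), Dirichlet at x = 1.
K2 = (1/h)*spdiags([-e 2*e -e], -1:1, m, m);  K2(1,1) = 1/h;
f2 = h*e;  f2(1) = h/2;

% Signed Boolean constraint:  u1(interface) - u2(interface) = 0.
B1 = sparse(1, m, 1, 1, m);                 % picks the last unknown of subdomain 1
B2 = sparse(1, 1, -1, 1, m);                % picks the first unknown of subdomain 2

KKT = [K1, sparse(m,m), B1';
       sparse(m,m), K2, B2';
       B1, B2, sparse(1,1)];
rhs = [f1; f2; 0];
sol = KKT\rhs;
u1 = sol(1:m);  u2 = sol(m+1:2*m);  lambda = sol(end);
% lambda approximates the flux transmitted across the interface; it vanishes
% here because u'(1/2) = 0 for this symmetric problem.

% Check against the exact solution u(x) = x(1-x)/2 at the interface: u(1/2) = 1/8.
err_interface = abs(u1(m) - 1/8)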

Mortar methods are discretization methods for partial differential equations that use separate discretizations on non-overlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. In engineering practice with the finite element method, continuity of the solution between non-matching subdomains is commonly implemented by multiple-point constraints.
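
A multiple-point constraint can be illustrated, independently of any particular mesh, by tying one degree of freedom to a combination of others and eliminating it. In the hypothetical MATLAB sketch below, the matrix, load vector, and constraint coefficients are illustrative assumptions, not taken from any specific discretization.

% Multiple-point constraint enforced by elimination: the slave DOF u(3) is
% tied to the masters u(2) and u(4) via u(3) = 0.5*u(2) + 0.5*u(4).
n = 5;
K = gallery('tridiag', n, -1, 2, -1);       % a stand-in SPD stiffness matrix
f = ones(n,1);

slave = 3;  masters = [2 4];  coeff = [0.5 0.5];

keep = setdiff(1:n, slave);                 % retained (master and free) DOFs
T = sparse(n, numel(keep));
T(sub2ind(size(T), keep, 1:numel(keep))) = 1;   % identity on retained DOFs
[~, loc] = ismember(masters, keep);
T(slave, loc) = coeff;                      % slave row expresses the constraint

Kr = T' * K * T;                            % reduced (constrained) system
fr = T' * f;
um = Kr \ fr;
u  = T * um;                                % full DOF vector; constraint is satisfied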

Finite element simulations of moderate-size models require solving linear systems with millions of unknowns. Several hours per time step is a typical sequential run time, so parallel computing is a necessity. Domain decomposition methods embody a large potential for parallelization of finite element methods and serve as a basis for distributed, parallel computations.

Example 1: 1D Linear BVP

Consider the linear boundary value problem

$$u''(x) - u(x) = 0 \quad \text{on } (0,1), \qquad u(0) = 0, \qquad u(1) = 1.$$

The exact solution is

$$u(x) = \frac{e^{x} - e^{-x}}{e^{1} - e^{-1}}.$$

Subdivide the domain into two subdomains, one from $[0, \tfrac12]$ and another from $[\tfrac12, 1]$. In the left subdomain define the interpolating function $u_1(x)$ and in the right define $u_2(x)$. At the interface between these two subdomains the following interface conditions shall be imposed:

$$u_1\left(\tfrac12\right) = u_2\left(\tfrac12\right), \qquad u_1'\left(\tfrac12\right) = u_2'\left(\tfrac12\right).$$
Let the interpolating functions be defined as

$$u_1(x) = \sum_{n=0}^{N} u_1\big(x_n^{(1)}\big)\, C_n\big(y_1(x)\big), \qquad
u_2(x) = \sum_{n=0}^{N} u_2\big(x_n^{(2)}\big)\, C_n\big(y_2(x)\big),$$

where $C_n(y)$ is the $n$th cardinal function of the Chebyshev polynomials of the first kind with input argument $y$, the $x_n^{(1)}$ and $x_n^{(2)}$ are the Chebyshev–Gauss–Lobatto collocation points mapped to the left and right subdomains, and $y_1(x) = 4x - 1$ and $y_2(x) = 4x - 3$ map $[0, \tfrac12]$ and $[\tfrac12, 1]$ onto $[-1, 1]$.
If N = 4, the approximation produced by this scheme can be obtained with the following MATLAB code.

clear all
N = 4;
a1 = 0;  b1 = 1/2;
% cheb is assumed to return the Chebyshev collocation nodes and the first-
% and second-derivative matrices D1, D2 on [a1, b1] (e.g. an adaptation of
% Trefethen's cheb.m); it is not shown here.
[T, D1, D2, E1, E2, x, xsub] = cheb(N, a1, b1);  % the diff matrices on [0,1/2]
                                                 % are the same as those on [1/2,1]
I = eye(N+1);
H = D2 - I;                                      % collocation form of u'' - u

% Subdomain 1: boundary condition u1(0) = 0, interior collocation rows,
% and continuity u1(1/2) - u2(1/2) = 0 in the last row.
H1 = [[1 zeros(1,N)]; H(2:end-1,:); [zeros(1,N) 1]];
H1 = [H1 [zeros(N,N+1); -[1 zeros(1,N)]]];

% Subdomain 2: derivative continuity -u1'(1/2) + u2'(1/2) = 0 in the first
% row, interior collocation rows, and boundary condition u2(1) = 1.
H2 = [D1(1,:); H(2:end-1,:); [zeros(1,N) 1]];
H2 = [[-D1(N+1,:); zeros(N,N+1)] H2];

K = [H1; H2];                                    % coupled (2N+2) x (2N+2) system
F = [zeros(2*N+1,1); 1];                         % only u2(1) = 1 is nonzero
u = K\F;

% Collocation points mapped to the two subdomains, and the exact solution.
xx = -cos(pi*(0:N)'/N);
x1 = 1/4*(xx+1);                                 % nodes in [0, 1/2]
x2 = 1/4*(xx+3);                                 % nodes in [1/2, 1]
x  = [x1; x2];
uex = (exp(x)-exp(-x))./(exp(1)-exp(-1));
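
Since the script already evaluates the exact solution uex at the collocation points of both subdomains, a quick accuracy check can be appended (this assumes the cheb helper used above is available so that the script runs):

% maximum error of the two-subdomain spectral solution at the collocation points
max(abs(u - uex))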

See also

Neumann–Neumann methods are domain decomposition preconditioners, so named because they solve a Neumann problem on each subdomain on both sides of the interface between the subdomains. Like all domain decomposition methods, they require the solution of a coarse problem to provide global communication, so that the number of iterations does not grow with the number of subdomains. Balancing domain decomposition is a Neumann–Neumann method with a special kind of coarse problem.

The Schur complement method, named after Issai Schur, is the earliest and most basic non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the interiors of the subdomains are eliminated. The remaining Schur complement system on the unknowns associated with the subdomain interfaces is solved by the conjugate gradient method.
