In mathematics, in the area of numerical analysis, Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin.
Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used: the Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive-definite bilinear form in the weak formulation; the Bubnov–Galerkin method (after Ivan Bubnov) does not require symmetry and uses the same basis functions both to approximate the solution and to impose the orthogonality constraints; and the Petrov–Galerkin method (after Georgii I. Petrov) allows test functions that differ from the basis functions used for the approximation.
Examples of Galerkin methods are the Galerkin method of weighted residuals, the most common way of assembling the global stiffness matrix in the finite element method; the boundary element method for solving integral equations; and Krylov subspace methods.
We first introduce and illustrate the Galerkin method as applied to a system of linear equations $Ax = b$. We take the matrix $A \in \mathbb{R}^{3\times 3}$ to be symmetric and positive definite, with right-hand side $b \in \mathbb{R}^{3}$. The true solution of this linear system is $x^{*} = A^{-1}b$.
With the Galerkin method, we can solve the system in a lower-dimensional subspace to obtain an approximate solution. Let the two columns of a matrix $V \in \mathbb{R}^{3\times 2}$ form a basis for that subspace.
Then we can write the Galerkin equation $(V^{T}AV)\,y = V^{T}b$, where the left-hand-side matrix is $V^{T}AV$ and the right-hand-side vector is $V^{T}b$.
We can then obtain the solution vector $y \in \mathbb{R}^{2}$ in the subspace, which we finally project back to the original space to determine the approximate solution to the original equation as $\tilde{x} = Vy$.
In this example, our original Hilbert space is actually the 3-dimensional Euclidean space $\mathbb{R}^{3}$ equipped with the standard scalar product $(u,v) = u^{T}v$, our 3-by-3 matrix $A$ defines the bilinear form $a(u,v) = u^{T}Av$, and the right-hand-side vector $b$ defines the bounded linear functional $f(v) = b^{T}v$. The columns $v_1, v_2$ of the matrix $V$ form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix $V^{T}AV$ are $(V^{T}AV)_{ij} = a(v_j, v_i) = v_i^{T}Av_j$, while the components of the right-hand-side vector $V^{T}b$ of the Galerkin equation are $(V^{T}b)_i = f(v_i) = v_i^{T}b$. Finally, the approximate solution is obtained from the components $y_1, y_2$ of the solution vector $y$ of the Galerkin equation and the basis as $\tilde{x} = y_1 v_1 + y_2 v_2 = Vy$.
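As a concrete illustration of the computation above, here is a minimal NumPy sketch of the Galerkin projection for a small linear system. The particular matrix $A$, right-hand side $b$, and subspace basis $V$ are illustrative values chosen here; they are not taken from the original example.

```python
import numpy as np

# Illustrative data (not the article's original numbers): a symmetric
# positive definite 3x3 matrix A and a right-hand side b.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 4.0]])
b = np.array([0.0, 1.0, 2.0])

# Orthonormal basis of a 2-dimensional subspace, stored as columns of V.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

x_true = np.linalg.solve(A, b)           # exact solution of A x = b

# Galerkin equation in the subspace: (V^T A V) y = V^T b
y = np.linalg.solve(V.T @ A @ V, V.T @ b)

x_approx = V @ y                         # project back to the original space
print("exact      :", x_true)
print("approximate:", x_approx)
```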
Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space $V$, namely: find $u \in V$ such that $a(u,v) = f(v)$ for all $v \in V$.
Here, $a(\cdot,\cdot)$ is a bilinear form (the exact requirements on $a(\cdot,\cdot)$ will be specified later) and $f$ is a bounded linear functional on $V$.
Choose a subspace $V_n \subset V$ of dimension $n$ and solve the projected problem: find $u_n \in V_n$ such that $a(u_n, v_n) = f(v_n)$ for all $v_n \in V_n$.
We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute $u_n$ as a finite linear combination of the basis vectors in $V_n$.
The key property of the Galerkin approach is that the error is orthogonal to the chosen subspace. Since $V_n \subset V$, we can use any $v_n \in V_n$ as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error $\epsilon_n = u - u_n$, which is the error between the solution of the original problem, $u$, and the solution of the Galerkin equation, $u_n$:
$$a(\epsilon_n, v_n) = a(u, v_n) - a(u_n, v_n) = f(v_n) - f(v_n) = 0 \quad \text{for all } v_n \in V_n.$$
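In the finite-dimensional setting of the earlier matrix example, Galerkin orthogonality says that the error $x^{*} - \tilde{x}$ is $A$-orthogonal to every vector in the subspace, i.e. $V^{T}A(x^{*}-\tilde{x}) = 0$. A short sketch checking this numerically, again with randomly generated illustrative data rather than values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)                         # random symmetric positive definite matrix
b = rng.standard_normal(3)
V = np.linalg.qr(rng.standard_normal((3, 2)))[0]    # orthonormal basis of a 2D subspace

x_true = np.linalg.solve(A, b)
y = np.linalg.solve(V.T @ A @ V, V.T @ b)
x_galerkin = V @ y

# Galerkin orthogonality: a(x* - x_n, v_n) = 0 for every v_n in the subspace.
print(V.T @ A @ (x_true - x_galerkin))              # numerically ~ [0, 0]
```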
Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.
Let $e_1, e_2, \ldots, e_n$ be a basis for $V_n$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $u_n \in V_n$ such that
$$a(u_n, e_i) = f(e_i), \quad i = 1, \ldots, n.$$
We expand $u_n$ with respect to this basis, $u_n = \sum_{j=1}^{n} u_j e_j$, and insert it into the equation above, to obtain
$$a\!\left(\sum_{j=1}^{n} u_j e_j,\, e_i\right) = \sum_{j=1}^{n} u_j\, a(e_j, e_i) = f(e_i), \quad i = 1, \ldots, n.$$
This previous equation is actually a linear system of equations $Au = f$, where
$$A_{ij} = a(e_j, e_i), \qquad f_i = f(e_i).$$
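To show how the entries $A_{ij} = a(e_j, e_i)$ and $f_i = f(e_i)$ are assembled in practice, here is a small sketch for a model problem chosen for illustration (it is not taken from the article): the 1D Poisson equation $-u'' = g$ on $(0,1)$ with $u(0) = u(1) = 0$, weak form $a(u,v) = \int_0^1 u'v'\,dx$, $f(v) = \int_0^1 g\,v\,dx$, and the sine functions $e_k(x) = \sin(k\pi x)$ as Galerkin basis.

```python
import numpy as np

# Model problem (illustrative): -u'' = g on (0, 1), u(0) = u(1) = 0,
# a(u, v) = int u' v' dx,  f(v) = int g v dx,  basis e_k(x) = sin(k pi x).
n = 5
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]

def integrate(values):                  # simple trapezoidal quadrature on the grid
    return np.sum(0.5 * (values[:-1] + values[1:])) * dx

g = lambda x: np.pi**2 * np.sin(np.pi * x)            # source; exact solution is sin(pi x)
e = lambda k, x: np.sin(k * np.pi * x)                # basis function e_k
de = lambda k, x: k * np.pi * np.cos(k * np.pi * x)   # derivative e_k'

# Assemble the Galerkin system: A_ij = a(e_j, e_i), f_i = f(e_i).
A = np.array([[integrate(de(j, xs) * de(i, xs)) for j in range(1, n + 1)]
              for i in range(1, n + 1)])
f = np.array([integrate(g(xs) * e(i, xs)) for i in range(1, n + 1)])

u = np.linalg.solve(A, f)                             # coefficients of u_n
u_n = sum(u[k - 1] * e(k, xs) for k in range(1, n + 1))
print("max error of u_n:", np.max(np.abs(u_n - np.sin(np.pi * xs))))
```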
Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form is symmetric.
Here, we will restrict ourselves to symmetric bilinear forms, that is,
$$a(u, v) = a(v, u) \quad \text{for all } u, v \in V.$$
While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.
The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution $u_n$.
The analysis will mostly rest on two properties of the bilinear form, namely boundedness, $|a(u,v)| \le C\,\|u\|\,\|v\|$ for all $u, v \in V$ and some constant $C > 0$, and ellipticity, $a(u,u) \ge c\,\|u\|^{2}$ for all $u \in V$ and some constant $c > 0$.
By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such a norm is often called an energy norm).
Since $V_n \subset V$, boundedness and ellipticity of the bilinear form apply to $V_n$. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.
The error $u - u_n$ between the original and the Galerkin solution admits the estimate
$$\|u - u_n\| \le \frac{C}{c}\, \inf_{v_n \in V_n} \|u - v_n\|,$$
a result known as Céa's lemma.
This means that, up to the constant $C/c$, the Galerkin solution $u_n$ is as close to the original solution $u$ as any other vector in $V_n$. In particular, it will be sufficient to study approximation by the spaces $V_n$, completely forgetting about the equation being solved.
Since the proof is very simple and illustrates the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary $v_n \in V_n$:
$$c\,\|u - u_n\|^{2} \le a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le C\,\|u - u_n\|\,\|u - v_n\|.$$
Dividing by $c\,\|u - u_n\|$ and taking the infimum over all possible $v_n \in V_n$ yields the lemma.
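In the Euclidean setting of the earlier matrix example, the bilinear form $a(u,v) = v^{T}Au$ with SPD $A$ is bounded with $C = \lambda_{\max}(A)$ and elliptic with $c = \lambda_{\min}(A)$, so Céa's lemma can be checked directly. The following sketch, with randomly generated illustrative data, compares the Galerkin error to the best-approximation error in the Euclidean norm:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + np.eye(6)                              # random SPD matrix
b = rng.standard_normal(6)
V = np.linalg.qr(rng.standard_normal((6, 3)))[0]     # orthonormal basis of a 3D subspace

x = np.linalg.solve(A, b)                            # exact solution
x_g = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)      # Galerkin solution

best = V @ (V.T @ x)                                 # Euclidean projection of x onto span(V)
lam = np.linalg.eigvalsh(A)                          # eigenvalues in ascending order
C_over_c = lam[-1] / lam[0]                          # boundedness / ellipticity constants

err_galerkin = np.linalg.norm(x - x_g)
err_best = np.linalg.norm(x - best)
print(err_galerkin <= C_over_c * err_best + 1e-12)   # Cea's bound holds: True
```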
For simplicity of presentation in the section above we have assumed that the bilinear form $a(u,v)$ is symmetric and positive definite, which implies that it is a scalar product and the expression $\|u\|_a = \sqrt{a(u,u)}$ is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove, in addition, Galerkin's best approximation property in the energy norm.
Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain
$$\|u - u_n\|_a^{2} = a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le \|u - u_n\|_a\, \|u - v_n\|_a.$$
Dividing by $\|u - u_n\|_a$ and taking the infimum over all possible $v_n \in V_n$ proves that the Galerkin approximation $u_n$ is the best approximation in the energy norm within the subspace $V_n \subset V$, i.e. $u_n$ is nothing but the orthogonal projection, with respect to the scalar product $a(\cdot,\cdot)$, of the solution $u$ onto the subspace $V_n$.
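The same statement can be verified numerically in the matrix setting: for SPD $A$, the Galerkin solution coincides with the $A$-orthogonal projection of the exact solution onto the subspace, and its energy-norm error is no larger than that of any other subspace element. A brief sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)                              # SPD matrix defining the energy norm
b = rng.standard_normal(5)
V = np.linalg.qr(rng.standard_normal((5, 2)))[0]     # basis of the subspace (columns)

x = np.linalg.solve(A, b)
x_g = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)      # Galerkin solution

# The Galerkin solution equals the A-orthogonal projection of x onto span(V).
P_A_x = V @ np.linalg.solve(V.T @ A @ V, V.T @ A @ x)
print(np.allclose(x_g, P_A_x))                       # True

# Its energy-norm error is minimal over the subspace (sampled competitors).
energy = lambda v: np.sqrt(v @ A @ v)
competitors = (V @ rng.standard_normal((2, 100))).T
print(all(energy(x - x_g) <= energy(x - w) + 1e-12 for w in competitors))  # True
```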
I. Elishakoff, M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy [6] [7] [8] [9] studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed for obtaining accurate results.
The approach is usually credited to Boris Galerkin. [10] [11] The method was explained to the Western reader by Hencky [12] and Duncan [13] [14] among others. Its convergence was studied by Mikhlin [15] and Leipholz. [16] [17] [18] [19] Its coincidence with the Fourier method was illustrated by Elishakoff et al. [20] [21] [22] Its equivalence to Ritz's method for conservative problems was shown by Singer. [23] Gander and Wanner [24] showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. [25] Elishakoff, Kaplunov and Kaplunov [26] show that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.