In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.
Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:
- Ritz–Galerkin method (after Walther Ritz), which typically assumes a symmetric and positive-definite bilinear form in the weak formulation, so that the discrete problem is equivalent to an energy minimization;
- Bubnov–Galerkin method (after Ivan Bubnov), which does not require symmetry and replaces energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution;
- Petrov–Galerkin method (after Georgii Petrov), which allows the test basis functions used for the orthogonality constraints to differ from the basis functions used to approximate the solution.
Examples of Galerkin methods are:
- the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method,
- the boundary element method for solving integral equations, and
- Krylov subspace methods.
We first introduce and illustrate the Galerkin method as applied to a system of linear equations $A x = b$ with a symmetric and positive definite matrix $A \in \mathbb{R}^{3 \times 3}$ and the solution and right-hand-side vectors $x, b \in \mathbb{R}^3$.
Let us take a two-dimensional subspace spanned by orthonormal vectors $v_1, v_2 \in \mathbb{R}^3$, collected as the columns of the matrix $V = [v_1, v_2] \in \mathbb{R}^{3 \times 2}$; then the matrix of the Galerkin equation is
$$A' = V^T A V,$$
the right-hand-side vector of the Galerkin equation is
$$b' = V^T b,$$
so that we obtain the solution vector
$$y = (A')^{-1} b'$$
to the Galerkin equation $A' y = b'$, which we finally lift back to the original space to determine the approximate solution to the original equation as
$$x \approx V y.$$
In this example, our original Hilbert space is actually the 3-dimensional Euclidean space $\mathbb{R}^3$ equipped with the standard scalar product $(u, v) = u^T v$, our 3-by-3 matrix $A$ defines the bilinear form $a(u, v) = u^T A v$, and the right-hand-side vector $b$ defines the bounded linear functional $f(v) = b^T v$. The columns $v_1, v_2$ of the matrix $V$ form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix $A' = V^T A V$ are $A'_{ij} = v_i^T A v_j = a(v_j, v_i)$, while the components of the right-hand-side vector $b' = V^T b$ of the Galerkin equation are $b'_i = b^T v_i = f(v_i)$. Finally, the approximate solution is obtained from the components of the solution vector $y$ of the Galerkin equation and the basis as $x \approx V y = \sum_j y_j v_j$.
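A minimal NumPy sketch of this construction follows; the particular values of $A$, $x$, and $V$ are assumptions chosen here for concreteness, not the values from the original example:

```python
import numpy as np

# Illustrative symmetric positive-definite matrix and exact solution
# (these particular values are assumptions chosen for demonstration).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
x_exact = np.array([1.0, 2.0, 3.0])
b = A @ x_exact

# Orthonormal basis of a 2-dimensional subspace, stored as columns of V.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

A_galerkin = V.T @ A @ V                      # 2-by-2 Galerkin matrix A' = V^T A V
b_galerkin = V.T @ b                          # projected right-hand side b' = V^T b
y = np.linalg.solve(A_galerkin, b_galerkin)   # solve the Galerkin equation A' y = b'
x_approx = V @ y                              # lift back to R^3

# Galerkin orthogonality: the error is A-orthogonal to the subspace,
# i.e. V^T A (x_exact - x_approx) = 0 up to rounding.
print(x_approx)
print(V.T @ A @ (x_exact - x_approx))
```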
Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space $V$, namely, find $u \in V$ such that
$$a(u, v) = f(v) \quad \text{for all } v \in V.$$
Here, $a(\cdot, \cdot)$ is a bilinear form (the exact requirements on $a(\cdot, \cdot)$ will be specified later) and $f$ is a bounded linear functional on $V$.
Choose a subspace $V_n \subset V$ of dimension $n$ and solve the projected problem: find $u_n \in V_n$ such that
$$a(u_n, v_n) = f(v_n) \quad \text{for all } v_n \in V_n.$$
We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute $u_n$ as a finite linear combination of the basis vectors in $V_n$.
The key property of the Galerkin approach is that the error is orthogonal to the chosen subspace. Since $V_n \subset V$, we can use any $v_n \in V_n$ as a test vector in the original equation. Subtracting the two equations gives the Galerkin orthogonality relation for the error $\epsilon_n = u - u_n$ between the solution of the original problem, $u$, and the solution of the Galerkin equation, $u_n$:
$$a(\epsilon_n, v_n) = a(u, v_n) - a(u_n, v_n) = f(v_n) - f(v_n) = 0 \quad \text{for all } v_n \in V_n.$$
Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.
Let $e_1, e_2, \ldots, e_n$ be a basis for $V_n$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $u_n \in V_n$ such that
$$a(u_n, e_i) = f(e_i), \quad i = 1, \ldots, n.$$
We expand $u_n$ with respect to this basis, $u_n = \sum_{j=1}^{n} u_j e_j$, and insert it into the equation above, to obtain
$$a\left(\sum_{j=1}^{n} u_j e_j, e_i\right) = \sum_{j=1}^{n} u_j \, a(e_j, e_i) = f(e_i), \quad i = 1, \ldots, n.$$
This previous equation is actually a linear system of equations $A u = f$, where
$$A_{ij} = a(e_j, e_i), \qquad f_i = f(e_i).$$
Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form is symmetric.
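As a concrete illustration of this assembly, here is a minimal sketch for the one-dimensional Poisson problem $-u'' = g$ on $(0, 1)$ with $u(0) = u(1) = 0$ and piecewise-linear hat basis functions; the choice of problem, grid, and quadrature rule are assumptions made for demonstration, not part of the original text:

```python
import numpy as np

# Galerkin assembly for -u'' = g on (0,1), u(0) = u(1) = 0, using
# piecewise-linear "hat" basis functions e_1, ..., e_n on a uniform grid.
# Weak form: a(u, v) = integral of u' v',  f(v) = integral of g v.

n = 99                        # number of interior grid points
h = 1.0 / (n + 1)             # mesh width
x = np.linspace(h, 1 - h, n)  # interior nodes

g = lambda t: np.pi**2 * np.sin(np.pi * t)  # manufactured right-hand side

# Stiffness matrix A_ij = a(e_j, e_i); for hat functions it is tridiagonal
# with entries 2/h on the diagonal and -1/h off the diagonal.
A = (np.diag(2.0 * np.ones(n)) +
     np.diag(-1.0 * np.ones(n - 1), 1) +
     np.diag(-1.0 * np.ones(n - 1), -1)) / h

# Load vector f_i = f(e_i), approximated here by one-point quadrature:
# integral of g e_i is roughly h * g(x_i).
f = h * g(x)

u = np.linalg.solve(A, f)  # coefficients of u_n in the hat-function basis

# The exact solution is u(x) = sin(pi x); the nodal error is O(h^2).
print(np.max(np.abs(u - np.sin(np.pi * x))))
```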
Here, we will restrict ourselves to symmetric bilinear forms, that is,
$$a(u, v) = a(v, u) \quad \text{for all } u, v \in V.$$
While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.
The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution $u_n$.
The analysis will mostly rest on two properties of the bilinear form, namely
- boundedness: for all $u, v \in V$ it holds that $a(u, v) \le C \, \|u\| \, \|v\|$ for some constant $C > 0$,
- ellipticity: for all $u \in V$ it holds that $a(u, u) \ge c \, \|u\|^2$ for some constant $c > 0$.
By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such norms are often called energy norms).
Since $V_n \subset V$, boundedness and ellipticity of the bilinear form apply to $V_n$. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.
The error $u - u_n$ between the original and the Galerkin solution admits the estimate
$$\|u - u_n\| \le \frac{C}{c} \inf_{v_n \in V_n} \|u - v_n\|,$$
a result known as Céa's lemma.
This means that, up to the constant $C/c$, the Galerkin solution $u_n$ is as close to the original solution $u$ as any other vector in $V_n$. In particular, it will be sufficient to study approximation by the spaces $V_n$, completely forgetting about the equation being solved.
Since the proof is very simple and is the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (the inequalities) and Galerkin orthogonality (the equals sign in the middle), we have for arbitrary $v_n \in V_n$:
$$c \, \|u - u_n\|^2 \le a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le C \, \|u - u_n\| \, \|u - v_n\|.$$
Dividing by $c \, \|u - u_n\|$ and taking the infimum over all possible $v_n \in V_n$ yields the lemma.
For simplicity of presentation, in the section above we have assumed that the bilinear form $a(u, v)$ is symmetric and positive definite, which implies that it is a scalar product and the expression $\|u\|_a = \sqrt{a(u, u)}$ is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove, in addition, Galerkin's best approximation property in the energy norm:
$$\|u - u_n\|_a \le \|u - v_n\|_a \quad \text{for all } v_n \in V_n.$$
Using the Galerkin $a$-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain
$$\|u - u_n\|_a^2 = a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le \|u - u_n\|_a \, \|u - v_n\|_a.$$
Dividing by $\|u - u_n\|_a$ and taking the infimum over all possible $v_n \in V_n$ proves that the Galerkin approximation $u_n \in V_n$ is the best approximation in the energy norm within the subspace $V_n$, i.e. $u_n$ is nothing but the orthogonal projection, with respect to the scalar product $a(u, v)$, of the solution $u$ onto the subspace $V_n$.
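This projection property is easy to check numerically. The following sketch, with a randomly generated positive-definite form and subspace chosen purely for illustration, verifies that the Galerkin solution beats every other sampled subspace element in the energy norm:

```python
import numpy as np

# Numerical check (illustrative values) that the Galerkin solution is the
# a-orthogonal projection of u onto the subspace, i.e. the best
# approximation in the energy norm ||w||_a = sqrt(w^T A w).
rng = np.random.default_rng(0)

M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)  # random symmetric positive-definite form
u = rng.standard_normal(5)     # "exact solution" of A u = b
b = A @ u

# Orthonormal basis of a random 2-dimensional subspace.
V = np.linalg.qr(rng.standard_normal((5, 2)))[0]

# Galerkin solution u_n = V y with (V^T A V) y = V^T b.
y = np.linalg.solve(V.T @ A @ V, V.T @ b)
u_n = V @ y

energy_norm = lambda w: np.sqrt(w @ A @ w)

# Every other sampled element of the subspace is a worse approximation.
for _ in range(1000):
    v_n = V @ rng.standard_normal(2)
    assert energy_norm(u - u_n) <= energy_norm(u - v_n) + 1e-12
print("Galerkin solution is the energy-norm best approximation.")
```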
I. Elishakoff, M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy [6][7][8][9] studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed for obtaining accurate results.
The approach is usually credited to Boris Galerkin. [10][11] The method was explained to the Western reader by Hencky [12] and Duncan, [13][14] among others. Its convergence was studied by Mikhlin [15] and Leipholz. [16][17][18][19] Its coincidence with the Fourier method was illustrated by Elishakoff et al. [20][21][22] Its equivalence to Ritz's method for conservative problems was shown by Singer. [23] Gander and Wanner [24] showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. [25] Elishakoff, Kaplunov and Kaplunov [26] show that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.