Galerkin method

In mathematics, in the area of numerical analysis, Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin.

Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:

- Ritz–Galerkin method (after Walther Ritz), which typically assumes a symmetric and positive-definite bilinear form, so that the weak formulation is equivalent to minimizing a quadratic energy functional, and the approximate solution is a linear combination of the given basis functions. [1]
- Bubnov–Galerkin method (after Ivan Bubnov), which does not require the bilinear form to be symmetric and replaces the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution.
- Petrov–Galerkin method (after Georgii I. Petrov [2]), which allows using basis functions for the orthogonality constraints (called test basis functions) that differ from the basis functions used to approximate the solution.

Examples of Galerkin methods are:

- the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method, [3] [4]
- the boundary element method for solving integral equations,
- Krylov subspace methods. [5]

Example: Matrix linear system

We first introduce and illustrate the Galerkin method as applied to a system of linear equations $Ax = b$. We take $A$ to be a $3 \times 3$ matrix that is symmetric and positive definite, and $b \in \mathbb{R}^3$ to be the right-hand-side vector, so that the true solution to this linear system is $x = A^{-1} b$.

With the Galerkin method, we can solve the system in a lower-dimensional space to obtain an approximate solution. Let $\{v_1, v_2\}$ be an orthonormal basis for the subspace, collected as the columns of the $3 \times 2$ matrix $V = [v_1 \; v_2]$.

Then, we can write the Galerkin equation $A^* y = b^*$, where the left-hand-side matrix is

$$A^* = V^T A V$$

and the right-hand-side vector is

$$b^* = V^T b.$$

We can then obtain the solution vector in the subspace,

$$y = (A^*)^{-1} b^*,$$

which we finally project back to the original space to determine the approximate solution to the original equation as

$$\tilde{x} = V y.$$

In this example, our original Hilbert space is actually the 3-dimensional Euclidean space $\mathbb{R}^3$ equipped with the standard scalar product $(u, v) = u^T v$, our 3-by-3 matrix $A$ defines the bilinear form $a(u, v) = v^T A u$, and the right-hand-side vector $b$ defines the bounded linear functional $f(v) = b^T v$. The columns $v_1, v_2$ of the matrix $V$ form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix $A^*$ are $A^*_{ij} = a(v_j, v_i) = v_i^T A v_j$, while the components of the right-hand-side vector $b^*$ of the Galerkin equation are $b^*_i = f(v_i) = v_i^T b$. Finally, the approximate solution is obtained from the components $y_1, y_2$ of the solution vector of the Galerkin equation and the basis as $\tilde{x} = y_1 v_1 + y_2 v_2$.
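The procedure can be made concrete in a few lines of NumPy. The matrix, right-hand side, and subspace basis below are illustrative stand-ins chosen for this sketch; any symmetric positive-definite $A$ and orthonormal $V$ would do.

```python
import numpy as np

# Illustrative SPD system A x = b (stand-in values chosen for this sketch).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Orthonormal basis {v1, v2} of a 2-dimensional subspace, stored as columns of V.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Galerkin equation: (V^T A V) y = V^T b.
A_star = V.T @ A @ V                   # 2x2 Galerkin matrix, entries v_i^T A v_j
b_star = V.T @ b                       # components v_i^T b
y = np.linalg.solve(A_star, b_star)    # solution vector in the subspace

x_approx = V @ y                       # project back to R^3
x_true = np.linalg.solve(A, b)
print(x_approx, x_true)
```

Enlarging the subspace (here, adding a third orthonormal column to $V$) reproduces the true solution exactly, since the projection then becomes the identity.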

Linear equation in a Hilbert space

Weak formulation of a linear equation

Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space $V$, namely,

find $u \in V$ such that $a(u, v) = f(v)$ for all $v \in V$.

Here, $a(\cdot, \cdot)$ is a bilinear form (the exact requirements on $a(\cdot, \cdot)$ will be specified later) and $f$ is a bounded linear functional on $V$.
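A standard concrete instance of this setting, included here for orientation, is the Poisson problem: for $-\Delta u = g$ on a domain $\Omega$ with $u = 0$ on the boundary, multiplying by a test function $v$ and integrating by parts gives

$$V = H_0^1(\Omega), \qquad a(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx, \qquad f(v) = \int_\Omega g \, v \, dx.$$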

Galerkin dimension reduction

Choose a subspace $V_n \subset V$ of dimension $n$ and solve the projected problem:

find $u_n \in V_n$ such that $a(u_n, v) = f(v)$ for all $v \in V_n$.

We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed: reducing the problem to a finite-dimensional vector subspace allows us to numerically compute $u_n$ as a finite linear combination of the basis vectors in $V_n$.

Galerkin orthogonality

The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since $V_n \subset V$, we can use any $v \in V_n$ as a test vector in the original equation. Subtracting the two equations, we get the Galerkin orthogonality relation for the error $\epsilon_n = u - u_n$, which is the error between the solution of the original problem, $u$, and the solution of the Galerkin equation, $u_n$:

$$a(\epsilon_n, v) = a(u, v) - a(u_n, v) = f(v) - f(v) = 0 \quad \text{for all } v \in V_n.$$
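In the matrix setting of the earlier example, this orthogonality relation says the residual $b - A\tilde{x}$ is orthogonal to every basis vector of the subspace, or equivalently that the error $x - \tilde{x}$ is $A$-orthogonal to the subspace. A short self-contained check, using the same illustrative stand-in data as the sketch above:

```python
import numpy as np

# Same illustrative stand-in data as the earlier sketch.
A = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

x_approx = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
x_true = np.linalg.solve(A, b)

# Galerkin orthogonality: V^T (b - A x_approx) = 0, i.e. the error
# x_true - x_approx is A-orthogonal to every vector in the subspace.
print(V.T @ (b - A @ x_approx))          # ~ [0, 0] up to round-off
print(V.T @ A @ (x_true - x_approx))     # the same statement via the error
```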

Matrix form of Galerkin's equation

Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.

Let $e_1, e_2, \ldots, e_n$ be a basis for $V_n$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $u_n \in V_n$ such that

$$a(u_n, e_i) = f(e_i), \qquad i = 1, \ldots, n.$$

We expand $u_n$ with respect to this basis, $u_n = \sum_{j=1}^{n} u_j e_j$, and insert it into the equation above, to obtain

$$a\left(\sum_{j=1}^{n} u_j e_j, e_i\right) = \sum_{j=1}^{n} u_j \, a(e_j, e_i) = f(e_i), \qquad i = 1, \ldots, n.$$

This previous equation is actually a linear system of equations $A u = f$, where

$$A_{ij} = a(e_j, e_i), \qquad f_i = f(e_i).$$
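As an illustration of the assembly just described, the sketch below applies the Galerkin method to the one-dimensional problem $-u'' = f$ on $(0, 1)$ with $u(0) = u(1) = 0$, where $a(u, v) = \int_0^1 u' v' \, dx$ and $f(v) = \int_0^1 f v \, dx$. The sine basis, the source term $f = 1$, and the trapezoidal quadrature are illustrative choices made for this sketch, not prescribed by the method.

```python
import numpy as np

def integrate(g, x):
    # Trapezoidal rule for integrating samples g over the grid x.
    return float(np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2.0)

n = 5                                    # dimension of the subspace V_n
x = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(x)                      # source term f(x) = 1 (illustrative choice)

# Basis e_k(x) = sin(k*pi*x), k = 1..n, and its derivatives.
basis = [np.sin((k + 1) * np.pi * x) for k in range(n)]
dbasis = [(k + 1) * np.pi * np.cos((k + 1) * np.pi * x) for k in range(n)]

# Assemble A_ij = a(e_j, e_i) and f_i = f(e_i).
A = np.array([[integrate(dbasis[j] * dbasis[i], x) for j in range(n)]
              for i in range(n)])
rhs = np.array([integrate(f * basis[i], x) for i in range(n)])

y = np.linalg.solve(A, rhs)              # coefficients of u_n in the basis
u_n = sum(y[k] * basis[k] for k in range(n))

u_exact = 0.5 * x * (1.0 - x)            # closed-form solution for f = 1
print(np.max(np.abs(u_n - u_exact)))     # small, and decreasing as n grows
```

With this particular basis the Galerkin matrix is (numerically) diagonal, so each coefficient could be read off directly; a non-orthogonal basis, such as the piecewise-linear hat functions of the finite element method, leads to a sparse banded system instead.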

Symmetry of the matrix

Due to the definition of the matrix entries, $A_{ij} = a(e_j, e_i)$, the matrix of the Galerkin equation is symmetric if and only if the bilinear form $a(\cdot, \cdot)$ is symmetric.

Analysis of Galerkin methods

Here, we will restrict ourselves to symmetric bilinear forms, that is, those satisfying

$$a(u, v) = a(v, u) \quad \text{for all } u, v \in V.$$

While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.

The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution $u_n$.

The analysis will mostly rest on two properties of the bilinear form, namely

- boundedness: for all $u, v \in V$, $a(u, v) \le C \, \|u\| \, \|v\|$ for some constant $C > 0$;
- ellipticity: for all $u \in V$, $a(u, u) \ge c \, \|u\|^2$ for some constant $c > 0$.

By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such norms are often called energy norms).

Well-posedness of the Galerkin equation

Since $V_n \subset V$, boundedness and ellipticity of the bilinear form apply to $V_n$. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.

Quasi-best approximation (Céa's lemma)

The error $u - u_n$ between the original and the Galerkin solution admits the estimate

$$\|u - u_n\| \le \frac{C}{c} \inf_{v \in V_n} \|u - v\|.$$

This means that, up to the constant $C/c$, the Galerkin solution $u_n$ is as close to the original solution $u$ as any other vector in $V_n$. In particular, it will be sufficient to study approximation by spaces $V_n$, completely forgetting about the equation being solved.

Proof

Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary $v \in V_n$:

$$c \, \|u - u_n\|^2 \le a(u - u_n, u - u_n) = a(u - u_n, u - v) \le C \, \|u - u_n\| \, \|u - v\|.$$

(The middle equality holds because $u_n - v \in V_n$, so $a(u - u_n, u_n - v) = 0$ by Galerkin orthogonality.) Dividing by $c \, \|u - u_n\|$ and taking the infimum over all possible $v \in V_n$ yields the lemma.
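For the finite-dimensional example above, the estimate can be verified directly: in the Euclidean norm, the boundedness and ellipticity constants of a symmetric positive-definite matrix are its extreme eigenvalues, and the infimum over the subspace is attained by the orthogonal projection. A sketch with the same illustrative stand-in data:

```python
import numpy as np

# Numerical check of Cea's estimate ||x - x_gal|| <= (C/c) * inf ||x - v||
# in the Euclidean norm. C and c are the extreme eigenvalues of the SPD
# matrix A (its boundedness and ellipticity constants). Stand-in data.
A = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # orthonormal columns

x = np.linalg.solve(A, b)
x_gal = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

eigs = np.linalg.eigvalsh(A)
C, c = eigs.max(), eigs.min()

best = np.linalg.norm(x - V @ (V.T @ x))   # infimum: orthogonal projection onto V_n
print(np.linalg.norm(x - x_gal), (C / c) * best)   # first value <= second value
```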

Galerkin's best approximation property in the energy norm

For simplicity of presentation in the section above we have assumed that the bilinear form $a(u, v)$ is symmetric and positive definite, which implies that it is a scalar product and the expression $\|u\|_a = \sqrt{a(u, u)}$ is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove, in addition, Galerkin's best approximation property in the energy norm.

Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain

$$\|u - u_n\|_a^2 = a(u - u_n, u - u_n) = a(u - u_n, u - v) \le \|u - u_n\|_a \, \|u - v\|_a \quad \text{for all } v \in V_n.$$

Dividing by $\|u - u_n\|_a$ and taking the infimum over all possible $v \in V_n$ proves that the Galerkin approximation $u_n \in V_n$ is the best approximation in the energy norm within the subspace $V_n$, i.e. $u_n$ is nothing but the orthogonal projection, with respect to the scalar product $a(u, v)$, of the solution $u$ onto the subspace $V_n$.
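This best-approximation property can also be observed numerically. The sketch below, using the same illustrative stand-in data as before, confirms that no element of the subspace beats the Galerkin solution in the energy norm:

```python
import numpy as np

# Best approximation in the energy norm ||v||_a = sqrt(v^T A v).
# Same illustrative stand-in data as the earlier sketches.
A = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])

x = np.linalg.solve(A, b)
x_gal = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

def energy_norm(v):
    return float(np.sqrt(v @ A @ v))

err_gal = energy_norm(x - x_gal)
rng = np.random.default_rng(0)
for _ in range(1000):
    v = V @ rng.standard_normal(2)       # an arbitrary element of the subspace
    assert energy_norm(x - v) >= err_gal - 1e-12
print(err_gal)
```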

Galerkin method for stepped structures

I. Elishakoff, M. Amato, A. Marzani, P. A. Arvan, and J. N. Reddy [6] [7] [8] [9] studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, the Dirac delta function, and the doublet function, are needed for obtaining accurate results.

History

The approach is usually credited to Boris Galerkin. [10] [11] The method was explained to the Western reader by Hencky [12] and Duncan, [13] [14] among others. Its convergence was studied by Mikhlin [15] and Leipholz. [16] [17] [18] [19] Its coincidence with the Fourier method was illustrated by Elishakoff et al. [20] [21] [22] Its equivalence to Ritz's method for conservative problems was shown by Singer. [23] Gander and Wanner [24] showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. [25] Elishakoff, Kaplunov and Kaplunov [26] show that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.


References

  1. A. Ern, J.L. Guermond, Theory and Practice of Finite Elements, Springer, 2004, ISBN 0-387-20574-8
  2. "Georgii Ivanovich Petrov (on his 100th birthday)", Fluid Dynamics, May 2012, Volume 47, Issue 3, pp 289-291, DOI 10.1134/S0015462812030015
  3. S. Brenner, L. R. Scott, The Mathematical Theory of Finite Element Methods, 2nd edition, Springer, 2005, ISBN 0-387-95451-1
  4. P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, 1978, ISBN 0-444-85028-7
  5. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM, 2003, ISBN 0-89871-534-2
  6. Elishakoff, I., Amato, M., Ankitha, A. P., & Marzani, A. (2021). Rigorous implementation of the Galerkin method for stepped structures needs generalized functions. Journal of Sound and Vibration, 490, 115708.
  7. Elishakoff, I., Amato, M., & Marzani, A. (2021). Galerkin’s method revisited and corrected in the problem of Jaworsky and Dowell. Mechanical Systems and Signal Processing, 155, 107604.
  8. Elishakoff, I., & Amato, M. (2021). Flutter of a beam in supersonic flow: truncated version of Timoshenko–Ehrenfest equation is sufficient. International Journal of Mechanics and Materials in Design, 1-17.
  9. Amato, M., Elishakoff, I., & Reddy, J. N. (2021). Flutter of a Multicomponent Beam in a Supersonic Flow. AIAA Journal, 59(11), 4342-4353.
  10. Galerkin, B.G., 1915, Rods and Plates, Series Occurring in Various Questions Concerning the Elastic Equilibrium of Rods and Plates, Vestnik Inzhenerov i Tekhnikov (Engineers and Technologists Bulletin), Vol. 19, 897-908 (in Russian). (English translation: 63-18925, Clearinghouse Fed. Sci. Tech. Info., 1963.)
  11. "Le destin douloureux de Walther Ritz (1878-1909)" (Jean-Claude Pont, editor), Cahiers de Vallesia, 24, (2012), ISBN 978-2-9700636-5-0
  12. Hencky H.,1927, Eine wichtige Vereinfachung der Methode von Ritz zur angennäherten Behandlung von Variationproblemen, ZAMM: Zeitschrift für angewandte Mathematik und Mechanik, Vol. 7, 80-81 (in German).
  13. Duncan, W.J.,1937, Galerkin’s Method in Mechanics and Differential Equations, Aeronautical Research Committee Reports and Memoranda, No. 1798.
  14. Duncan, W.J., 1938, The Principles of the Galerkin Method, Aeronautical Research Report and Memoranda, No. 1894.
  15. S. G. Mikhlin, "Variational methods in Mathematical Physics", Pergamon Press, 1964
  16. Leipholz H.H.E., 1976, Use of Galerkin’s Method for Vibration Problems, Shock and Vibration Digest, Vol. 8, 3-18
  17. Leipholz H.H.E., 1967, Über die Wahl der Ansatzfunktionen bei der Durchführung des Verfahrens von Galerkin, Acta Mech., Vol. 3, 295-317 (in German).
  18. Leipholz H.H.E., 1967, Über die Befreiung der Anzatzfunktionen des Ritzschen und Galerkinschen Verfahrens von den Randbedingungen, Ing. Arch., Vol. 36, 251-261 (in German).
  19. Leipholz, H.H.E., 1976, Use of Galerkin's Method for Vibration Problems, The Shock and Vibration Digest, Vol. 8, 3-18.
  20. Elishakoff, I., Lee, L.H.N.,1986, On Equivalence of the Galerkin and Fourier Series Methods for One Class of Problems, Journal of Sound and Vibration, Vol. 109, 174-177.
  21. Elishakoff, I., Zingales, M., 2003, Coincidence of Bubnov-Galerkin and Exact Solution in an Applied Mechanics Problem, Journal of Applied Mechanics, Vol. 70, 777-779.
  22. Elishakoff, I., Zingales M., 2004, Convergence of Bubnov-Galerkin Method Exemplified, AIAA Journal, Vol. 42(9), 1931-1933.
  23. Singer J., 1962, On Equivalence of the Galerkin and Rayleigh-Ritz Methods, Journal of the Royal Aeronautical Society, Vol. 66, No. 621, p.592.
  24. Gander, M.J, Wanner, G., 2012, From Euler, Ritz, and Galerkin to Modern Computing, SIAM Review, Vol. 54(4), 627-666.
  25. Repin, S., 2017, One Hundred Years of the Galerkin Method, Computational Methods and Applied Mathematics, Vol. 17(3), 351-357.
  26. Elishakoff, I., Julius Kaplunov, Elizabeth Kaplunov, 2020, "Galerkin's method was not developed by Ritz, contrary to the Timoshenko's statement", in Nonlinear Dynamics of Discrete and Continuous Systems (A. Abramyan, I. Andrianov and V. Gaiko, eds.), pp. 63-82, Springer, Berlin.