Galerkin method


In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.


Often, when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:

- Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive-definite bilinear form in the weak formulation, so that the problem can equivalently be posed as the minimization of a quadratic energy functional, and the approximate solution is a linear combination of the given set of basis functions. [1]
- Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and replaces the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution.
- Petrov–Galerkin method (after Georgii I. Petrov [2]) allows the basis functions used for the orthogonality constraints (the test functions) to differ from the basis functions used to approximate the solution.

Examples of Galerkin methods are:

- the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method, [3] [4]
- the boundary element method for solving integral equations,
- Krylov subspace methods. [5]

Example: Matrix linear system

We first introduce and illustrate the Galerkin method as being applied to a system of linear equations $Ax = b$ with a symmetric and positive definite matrix $A \in \mathbb{R}^{3 \times 3}$, the solution vector $x$, and the right-hand-side vector $b$.

Let us take a two-dimensional subspace of $\mathbb{R}^3$ and collect an orthonormal basis of it as the columns of a matrix $V \in \mathbb{R}^{3 \times 2}$. Then the matrix of the Galerkin equation is $V^T A V$, the right-hand-side vector of the Galerkin equation is $V^T b$, and solving the 2-by-2 Galerkin equation $(V^T A V)\, y = V^T b$ gives the solution vector $y$, which we finally lift back to the original space to determine the approximate solution to the original equation as $x \approx V y$.

In this example, our original Hilbert space is actually the 3-dimensional Euclidean space $\mathbb{R}^3$ equipped with the standard scalar product $(u, v) = u^T v$, our 3-by-3 matrix $A$ defines the bilinear form $a(u, v) = v^T A u$, and the right-hand-side vector $b$ defines the bounded linear functional $f(v) = v^T b$. The columns $v_1, v_2$ of the matrix $V$ form an orthonormal basis of the 2-dimensional subspace of the Galerkin projection. The entries of the 2-by-2 Galerkin matrix are $a(v_j, v_i) = v_i^T A v_j$, while the components of the right-hand-side vector of the Galerkin equation are $f(v_i) = v_i^T b$. Finally, the approximate solution is obtained from the components $y_j$ of the solution vector of the Galerkin equation and the basis as $y_1 v_1 + y_2 v_2 = V y$.
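A minimal NumPy sketch of this construction follows. The matrix A, the vector b, and the subspace basis V below are hypothetical values chosen purely for illustration; they are not the specific numbers of the original example.

```python
import numpy as np

# Illustrative symmetric positive definite matrix and right-hand side
# (hypothetical values, chosen only for this sketch).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Orthonormal basis of a 2-dimensional subspace, stored as the columns of V.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Galerkin matrix V^T A V and right-hand side V^T b of the projected problem.
A_n = V.T @ A @ V
b_n = V.T @ b

# Solve the 2-by-2 Galerkin equation and lift the result back to R^3.
y = np.linalg.solve(A_n, b_n)
x_approx = V @ y

# The residual b - A x_approx is orthogonal to the subspace (Galerkin orthogonality).
print("Galerkin solution y:", y)
print("approximate solution x:", x_approx)
print("V^T (b - A x):", V.T @ (b - A @ x_approx))
```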

Linear equation in a Hilbert space

Weak formulation of a linear equation

Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space $V$, namely,

find $u \in V$ such that $a(u, v) = f(v)$ for all $v \in V$.

Here, $a(\cdot, \cdot)$ is a bilinear form (the exact requirements on $a(\cdot, \cdot)$ will be specified later) and $f$ is a bounded linear functional on $V$.
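As a standard concrete instance of this abstract setting (given here only for illustration), the Poisson problem $-\Delta u = g$ on a domain $\Omega$ with homogeneous Dirichlet boundary conditions fits this framework with $V = H_0^1(\Omega)$ and

$a(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx, \qquad f(v) = \int_\Omega g \, v \, dx.$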

Galerkin dimension reduction

Choose a subspace $V_n \subset V$ of dimension $n$ and solve the projected problem:

Find $u_n \in V_n$ such that $a(u_n, v_n) = f(v_n)$ for all $v_n \in V_n$.

We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute $u_n$ as a finite linear combination of the basis vectors in $V_n$.

Galerkin orthogonality

The key property of the Galerkin approach is that the error is orthogonal to the chosen subspace. Since $V_n \subset V$, we can use any $v_n \in V_n$ as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error $\epsilon_n = u - u_n$, which is the error between the solution of the original problem, $u$, and the solution of the Galerkin equation, $u_n$:

$a(\epsilon_n, v_n) = a(u, v_n) - a(u_n, v_n) = f(v_n) - f(v_n) = 0 \qquad \text{for all } v_n \in V_n.$

Matrix form of Galerkin's equation

Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.

Let $e_1, e_2, \ldots, e_n$ be a basis for $V_n$. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find $u_n \in V_n$ such that

$a(u_n, e_i) = f(e_i), \qquad i = 1, \ldots, n.$

We expand $u_n$ with respect to this basis, $u_n = \sum_{j=1}^{n} u_j e_j$, and insert it into the equation above, to obtain

$a\!\left( \sum_{j=1}^{n} u_j e_j, e_i \right) = \sum_{j=1}^{n} u_j \, a(e_j, e_i) = f(e_i), \qquad i = 1, \ldots, n.$

This previous equation is actually a linear system of equations $A u = f$, where

$A_{ij} = a(e_j, e_i), \qquad f_i = f(e_i).$
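The following NumPy sketch illustrates this assembly step. The model problem (the 1-D Poisson equation $-u'' = g$ on $(0,1)$ with $u(0) = u(1) = 0$), the sine basis $e_k(x) = \sin(k \pi x)$, the choice $g \equiv 1$, and the simple trapezoidal quadrature are all assumptions made only for this example; they are not prescribed by the abstract setting.

```python
import numpy as np

# Sketch: Galerkin discretization of -u'' = g on (0, 1), u(0) = u(1) = 0,
# using the sine basis e_k(x) = sin(k*pi*x) for V_n (illustrative choices).
n = 5
x = np.linspace(0.0, 1.0, 2001)             # quadrature grid

def integrate(y):                            # composite trapezoidal rule on the grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def e(k):                                    # basis function e_k
    return np.sin(k * np.pi * x)

def de(k):                                   # derivative e_k'
    return k * np.pi * np.cos(k * np.pi * x)

g = np.ones_like(x)                          # right-hand side g(x) = 1 (assumption)

# Assemble A_ij = a(e_j, e_i) = \int e_j' e_i' dx and f_i = f(e_i) = \int g e_i dx.
A = np.array([[integrate(de(j + 1) * de(i + 1)) for j in range(n)] for i in range(n)])
f = np.array([integrate(g * e(i + 1)) for i in range(n)])

u = np.linalg.solve(A, f)                    # coefficients of u_n in the chosen basis
u_n = sum(u[k] * e(k + 1) for k in range(n))

u_exact = 0.5 * x * (1.0 - x)                # exact solution of -u'' = 1
print("max pointwise error:", np.abs(u_n - u_exact).max())
```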

Symmetry of the matrix

Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form is symmetric.
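Indeed, with the entry definition above, symmetry of the form gives

$A_{ij} = a(e_j, e_i) = a(e_i, e_j) = A_{ji},$

and conversely, by bilinearity, a symmetric Galerkin matrix implies that the form is symmetric on $V_n$.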

Analysis of Galerkin methods

Here, we will restrict ourselves to symmetric bilinear forms, that is, those satisfying $a(u, v) = a(v, u)$ for all $u, v \in V$.

While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.

The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution $u_n$.

The analysis will mostly rest on two properties of the bilinear form, namely

- boundedness: for all $u, v \in V$ it holds that $a(u, v) \le C \, \|u\| \, \|v\|$ for some constant $C > 0$,
- ellipticity: for all $u \in V$ it holds that $a(u, u) \ge c \, \|u\|^2$ for some constant $c > 0$.

By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such a norm is often called an energy norm).
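For a symmetric bilinear form satisfying these two conditions, the induced energy norm is equivalent to the norm of $V$:

$\sqrt{c}\, \|u\| \;\le\; \|u\|_a = \sqrt{a(u, u)} \;\le\; \sqrt{C}\, \|u\| \qquad \text{for all } u \in V.$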

Well-posedness of the Galerkin equation

Since $V_n \subset V$, boundedness and ellipticity of the bilinear form apply to $V_n$. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.

Quasi-best approximation (Céa's lemma)

The error $u - u_n$ between the original and the Galerkin solution admits the estimate

$\|u - u_n\| \le \frac{C}{c} \inf_{v_n \in V_n} \|u - v_n\|.$

This means that, up to the constant $C/c$, the Galerkin solution $u_n$ is as close to the original solution $u$ as any other vector in $V_n$. In particular, it will be sufficient to study approximation by the spaces $V_n$, completely forgetting about the equation being solved.

Proof

Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (the inequalities) and Galerkin orthogonality (the equals sign in the middle), we have for arbitrary $v_n \in V_n$:

$c\, \|u - u_n\|^2 \le a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le C\, \|u - u_n\| \, \|u - v_n\|.$

Dividing by $c\, \|u - u_n\|$ and taking the infimum over all possible $v_n \in V_n$ yields the lemma.

Galerkin's best approximation property in the energy norm

For simplicity of presentation, in the section above we have assumed that the bilinear form $a(u, v)$ is symmetric and positive definite, which implies that it is a scalar product and the expression $\|u\|_a = \sqrt{a(u, u)}$ is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove, in addition, Galerkin's best approximation property in the energy norm.

Using the Galerkin $a$-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain

$\|u - u_n\|_a^2 = a(u - u_n, u - u_n) = a(u - u_n, u - v_n) \le \|u - u_n\|_a \, \|u - v_n\|_a.$

Dividing by $\|u - u_n\|_a$ and taking the infimum over all possible $v_n \in V_n$ proves that

$\|u - u_n\|_a \le \inf_{v_n \in V_n} \|u - v_n\|_a,$

i.e. the Galerkin approximation $u_n$ is the best approximation in the energy norm within the subspace $V_n$; in other words, $u_n$ is nothing but the orthogonal projection, with respect to the scalar product $a(\cdot, \cdot)$, of the solution $u$ onto the subspace $V_n$.
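Continuing the NumPy sketch from the matrix example above (the same illustrative $A$, $b$, and $V$, repeated here so the snippet is self-contained), one can verify numerically that the Galerkin solution coincides with the $A$-orthogonal (energy-norm) projection of the exact solution onto the subspace:

```python
import numpy as np

# Same illustrative data as in the earlier sketch (hypothetical values).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

x = np.linalg.solve(A, b)                          # exact solution of A x = b
x_gal = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)  # Galerkin solution in the subspace

# A-orthogonal (energy-norm) projection of x onto the column space of V.
x_proj = V @ np.linalg.solve(V.T @ A @ V, V.T @ (A @ x))

print(np.allclose(x_gal, x_proj))                  # True: Galerkin = energy-norm projection
```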

Galerkin method for stepped structures

I. Elishakoff, M. Amato, A. Marzani, P. A. Arvan, and J. N. Reddy [6] [7] [8] [9] studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed to obtain accurate results.

History

The approach is usually credited to Boris Galerkin. [10] [11] The method was explained to the Western reader by Hencky [12] and Duncan, [13] [14] among others. Its convergence was studied by Mikhlin [15] and Leipholz. [16] [17] [18] [19] Its coincidence with the Fourier method was illustrated by Elishakoff et al. [20] [21] [22] Its equivalence to Ritz's method for conservative problems was shown by Singer. [23] Gander and Wanner [24] showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. [25] Elishakoff, Kaplunov and Kaplunov [26] showed that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.



References

  1. A. Ern, J. L. Guermond, Theory and Practice of Finite Elements, Springer, 2004, ISBN 0-387-20574-8
  2. "Georgii Ivanovich Petrov (on his 100th birthday)", Fluid Dynamics, May 2012, Volume 47, Issue 3, pp 289-291, DOI 10.1134/S0015462812030015
  3. S. Brenner, R. L. Scott, The Mathematical Theory of Finite Element Methods, 2nd edition, Springer, 2005, ISBN 0-387-95451-1
  4. P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, 1978, ISBN 0-444-85028-7
  5. Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edition, SIAM, 2003, ISBN 0-89871-534-2
  6. Elishakoff, I., Amato, M., Ankitha, A. P., & Marzani, A. (2021). Rigorous implementation of the Galerkin method for stepped structures needs generalized functions. Journal of Sound and Vibration, 490, 115708.
  7. Elishakoff, I., Amato, M., & Marzani, A. (2021). Galerkin’s method revisited and corrected in the problem of Jaworsky and Dowell. Mechanical Systems and Signal Processing, 155, 107604.
  8. Elishakoff, I., & Amato, M. (2021). Flutter of a beam in supersonic flow: truncated version of Timoshenko–Ehrenfest equation is sufficient. International Journal of Mechanics and Materials in Design, 1-17.
  9. Amato, M., Elishakoff, I., & Reddy, J. N. (2021). Flutter of a Multicomponent Beam in a Supersonic Flow. AIAA Journal, 59(11), 4342-4353.
  10. Galerkin, B.G.,1915, Rods and Plates, Series Occurring in Various Questions Concerning the Elastic Equilibrium of Rods and Plates, Vestnik Inzhenerov i Tekhnikov, (Engineers and Technologists Bulletin), Vol. 19, 897-908 (in Russian),(English Translation: 63-18925, Clearinghouse Fed. Sci. Tech. Info.1963).
  11. "Le destin douloureux de Walther Ritz (1878-1909)", (Jean-Claude Pont, editor), Cahiers de Vallesia, 24, (2012), ISBN   978-2-9700636-5-0
  12. Hencky H.,1927, Eine wichtige Vereinfachung der Methode von Ritz zur angennäherten Behandlung von Variationproblemen, ZAMM: Zeitschrift für angewandte Mathematik und Mechanik, Vol. 7, 80-81 (in German).
  13. Duncan, W.J.,1937, Galerkin’s Method in Mechanics and Differential Equations, Aeronautical Research Committee Reports and Memoranda, No. 1798.
  14. Duncan, W.J., 1938, The Principles of the Galerkin Method, Aeronautical Research Report and Memoranda, No. 1894.
  15. S. G. Mikhlin, "Variational methods in Mathematical Physics", Pergamon Press, 1964
  16. Leipholz H.H.E., 1976, Use of Galerkin’s Method for Vibration Problems, Shock and Vibration Digest, Vol. 8, 3-18
  17. Leipholz H.H.E., 1967, Über die Wahl der Ansatzfunktionen bei der Durchführung des Verfahrens von Galerkin, Acta Mech., Vol. 3, 295-317 (in German).
  18. Leipholz H.H.E., 1967, Über die Befreiung der Anzatzfunktionen des Ritzschen und Galerkinschen Verfahrens von den Randbedingungen, Ing. Arch., Vol. 36, 251-261 (in German).
  19. Leipholz, H.H.E.,1976, Use of Galerkin’s Method for Vibration Problems, The Shock and Vibration Digest Vol. 8, 3-18, 1976.
  20. Elishakoff, I., Lee, L.H.N.,1986, On Equivalence of the Galerkin and Fourier Series Methods for One Class of Problems, Journal of Sound and Vibration, Vol. 109, 174-177.
  21. Elishakoff, I., Zingales, M., 2003, Coincidence of Bubnov-Galerkin and Exact Solution in an Applied Mechanics Problem, Journal of Applied Mechanics, Vol. 70, 777-779.
  22. Elishakoff, I., Zingales M., 2004, Convergence of Bubnov-Galerkin Method Exemplified, AIAA Journal, Vol. 42(9), 1931-1933.
  23. Singer J., 1962, On Equivalence of the Galerkin and Rayleigh-Ritz Methods, Journal of the Royal Aeronautical Society, Vol. 66, No. 621, p.592.
  24. Gander, M.J, Wanner, G., 2012, From Euler, Ritz, and Galerkin to Modern Computing, SIAM Review, Vol. 54(4), 627-666.
  25. Repin, S., 2017, One Hundred Years of the Galerkin Method, Computational Methods and Applied Mathematics, Vol. 17(3), 351-357.
  26. Elishakoff, I., Julius Kaplunov, Elizabeth Kaplunov, 2020, "Galerkin's method was not developed by Ritz, contrary to the Timoshenko's statement", in Nonlinear Dynamics of Discrete and Continuous Systems (A. Abramyan, I. Andrianov and V. Gaiko, eds.), pp. 63-82, Springer, Berlin.