In the finite element method for the numerical solution of elliptic partial differential equations, the stiffness matrix is a matrix that represents the system of linear equations that must be solved in order to ascertain an approximate solution to the differential equation.
For simplicity, we will first consider the Poisson problem
$-\nabla^2 u = f$
on some domain Ω, subject to the boundary condition u = 0 on the boundary of Ω. To discretize this equation by the finite element method, one chooses a set of basis functions {φ1, …, φn} defined on Ω which also vanish on the boundary. One then approximates
$u \approx u^h = u_1 \varphi_1 + \cdots + u_n \varphi_n .$
The coefficients u1, u2, …, un are determined so that the error in the approximation is orthogonal to each basis function φi:
$\int_\Omega \varphi_i f \, dx = -\int_\Omega \varphi_i \, \nabla^2 u^h \, dx = \int_\Omega \nabla\varphi_i \cdot \nabla u^h \, dx = \sum_{j=1}^{n} u_j \int_\Omega \nabla\varphi_i \cdot \nabla\varphi_j \, dx ,$
where the middle equality follows from integration by parts, the boundary term vanishing as a consequence of the homogeneous Dirichlet boundary conditions. The stiffness matrix is the n × n matrix A defined by
$A_{ij} = \int_\Omega \nabla\varphi_i \cdot \nabla\varphi_j \, dx .$
By defining the vector F with components $F_i = \int_\Omega \varphi_i f \, dx$, the coefficients ui are determined by the linear system Au = F. The stiffness matrix is symmetric, i.e. Aij = Aji, so all its eigenvalues are real. Moreover, it is a strictly positive-definite matrix, so that the system Au = F always has a unique solution. (For other problems, these nice properties will be lost.)
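To make the procedure concrete, the following sketch (a minimal illustration; the variable names and the uniform one-dimensional grid are assumptions) assembles and solves the system Au = F for the Poisson problem in one dimension with piecewise linear hat functions:

```python
# Minimal sketch: Galerkin assembly for the 1D Poisson problem -u'' = f on (0, 1)
# with u(0) = u(1) = 0, using piecewise linear "hat" basis functions.
import numpy as np

n = 9                        # number of interior nodes (degrees of freedom)
h = 1.0 / (n + 1)            # uniform mesh width

# Stiffness matrix A_ij = integral of phi_i' * phi_j': tridiagonal with 2/h on the
# diagonal and -1/h on the off-diagonals.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector F_i = integral of phi_i * f; exact for the constant source f = 1.
f = lambda t: 1.0
x = np.linspace(h, 1.0 - h, n)
F = h * np.array([f(xi) for xi in x])

u = np.linalg.solve(A, F)    # coefficients u_i of the approximation u^h
print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))   # matches the exact solution at the nodes
```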
Note that the stiffness matrix depends on the computational grid used for the domain and on the type of finite element employed. For example, a discretization with piecewise quadratic finite elements has more degrees of freedom than one with piecewise linear elements on the same mesh, so the corresponding stiffness matrix is larger.
Determining the stiffness matrix for other PDEs follows essentially the same procedure, but it can be complicated by the choice of boundary conditions. As a more complex example, consider the elliptic equation
$-\sum_{k,l=1}^{d} \partial_k \left( a^{kl} \, \partial_l u \right) = f ,$
where $a(x) = \{ a^{kl}(x) \}$ is a positive-definite matrix defined for each point x in the domain. We impose the Robin boundary condition
$u + \sum_{k,l=1}^{d} \nu_k \, a^{kl} \, \partial_l u = 0 ,$
where νk is the component of the unit outward normal vector ν in the k-th direction. The system to be solved is
$\sum_{j=1}^{n} \left( \int_\Omega \sum_{k,l=1}^{d} a^{kl} \, \partial_l \varphi_j \, \partial_k \varphi_i \, dx + \int_{\partial\Omega} \varphi_i \, \varphi_j \, ds \right) u_j = \int_\Omega \varphi_i f \, dx ,$
as can be shown using an analogue of Green's identity. The coefficients ui are still found by solving a system of linear equations, but the matrix representing the system is markedly different from that for the ordinary Poisson problem.
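In outline, the argument runs as follows: multiplying the equation by a test function φi, integrating over Ω, and applying the divergence theorem gives
$\int_\Omega \varphi_i f \, dx = \int_\Omega \sum_{k,l} a^{kl} \, \partial_l u \, \partial_k \varphi_i \, dx - \int_{\partial\Omega} \varphi_i \sum_{k,l} \nu_k \, a^{kl} \, \partial_l u \, ds .$
The Robin condition replaces the conormal derivative $\sum_{k,l} \nu_k a^{kl} \partial_l u$ by −u on the boundary, and substituting the approximation u ≈ Σj uj φj then yields the linear system displayed above.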
In general, to each scalar elliptic operator L of order 2k, there is associated a bilinear form B on the Sobolev space Hk, so that the weak formulation of the equation Lu = f is
$B(u, v) = \int_\Omega f v \, dx$
for all functions v in Hk. Then the stiffness matrix for this problem is
$A_{ij} = B(\varphi_j, \varphi_i) .$
In order to implement the finite element method on a computer, one must first choose a set of basis functions and then compute the integrals defining the stiffness matrix. Usually, the domain Ω is discretized by some form of mesh generation, wherein it is divided into non-overlapping triangles or quadrilaterals, which are generally referred to as elements. The basis functions are then chosen to be polynomials of some order within each element, and continuous across element boundaries. The simplest choices are piecewise linear for triangular elements and piecewise bilinear for rectangular elements.
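As an illustration of the resulting data structures (the array names are assumptions and no particular mesh generator is implied), a triangulation is commonly stored as an array of node coordinates together with an array of elements listing their vertex indices:

```python
import numpy as np

# Illustrative arrays only: nodes[i] holds the coordinates of node i, and
# triangles[k] lists the indices of the three vertices of element T_k,
# ordered counter-clockwise.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
triangles = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])

# With piecewise linear elements, one basis function sits at each node, so the
# assembled stiffness matrix is len(nodes) x len(nodes) before boundary conditions.
print(len(nodes), "nodes,", len(triangles), "elements")
```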
The element stiffness matrix A[k] for element Tk is the matrix
$A^{[k]}_{ij} = \int_{T_k} \nabla\varphi_i \cdot \nabla\varphi_j \, dx .$
Most entries of the element stiffness matrix vanish: A[k]ij is zero whenever one of the corresponding basis functions φi or φj is identically zero within Tk. The full stiffness matrix A is the sum of the element stiffness matrices. In particular, for basis functions that are only supported locally, the stiffness matrix is sparse.
For many standard choices of basis functions, e.g. piecewise linear basis functions on triangles, there are simple formulas for the element stiffness matrices. For example, for piecewise linear elements, consider a triangle with vertices (x1, y1), (x2, y2), (x3, y3), and define the 2×3 matrix
$D = \begin{pmatrix} x_2 - x_3 & x_3 - x_1 & x_1 - x_2 \\ y_2 - y_3 & y_3 - y_1 & y_1 - y_2 \end{pmatrix} .$
Then the element stiffness matrix is
$A^{[k]} = \frac{1}{4 \, \mathrm{Area}(T_k)} \, D^{\mathrm{T}} D .$
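This formula translates directly into code. The sketch below (the function name element_stiffness and its interface are assumptions, not from the text) computes the element stiffness matrix of a linear triangular element from its vertex coordinates and checks it on the reference triangle:

```python
import numpy as np

def element_stiffness(p1, p2, p3):
    """Element stiffness matrix of a linear triangle with vertices p1, p2, p3
    (each an (x, y) pair), using A^[k] = D^T D / (4 * Area(T_k))."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    D = np.array([[x2 - x3, x3 - x1, x1 - x2],
                  [y2 - y3, y3 - y1, y1 - y2]], dtype=float)
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return D.T @ D / (4.0 * area)

# Reference triangle with vertices (0,0), (1,0), (0,1):
print(element_stiffness((0, 0), (1, 0), (0, 1)))
# [[ 1.  -0.5 -0.5]
#  [-0.5  0.5  0. ]
#  [-0.5  0.   0.5]]
```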
When the differential equation is more complicated, say by having an inhomogeneous diffusion coefficient, the integral defining the element stiffness matrix can be evaluated by Gaussian quadrature.
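As a hedged illustration of such an evaluation (the function name is an assumption, and a simple edge-midpoint rule stands in here for a higher-order Gauss rule), the element integrals for a variable scalar coefficient c(x, y) can be approximated as follows:

```python
import numpy as np

def element_stiffness_variable(p1, p2, p3, c):
    """Element stiffness entries int_T c(x, y) grad(phi_i).grad(phi_j) dx for a
    linear triangle; the coefficient c is integrated with the edge-midpoint rule,
    which is exact for quadratic integrands.  Names and interface are illustrative."""
    P = np.array([p1, p2, p3], dtype=float)
    area = 0.5 * abs((P[1, 0] - P[0, 0]) * (P[2, 1] - P[0, 1])
                     - (P[2, 0] - P[0, 0]) * (P[1, 1] - P[0, 1]))
    # Coefficients of phi_i = a_i + b_i x + c_i y with phi_i(P_j) = delta_ij.
    M = np.column_stack([np.ones(3), P])   # rows [1, x_j, y_j]
    coeffs = np.linalg.inv(M)              # column i holds (a_i, b_i, c_i)
    grads = coeffs[1:, :].T                # row i is the (constant) gradient of phi_i
    # Quadrature for the coefficient: c sampled at the three edge midpoints.
    midpoints = [(P[0] + P[1]) / 2, (P[1] + P[2]) / 2, (P[2] + P[0]) / 2]
    c_integral = (area / 3.0) * sum(c(x, y) for x, y in midpoints)
    return c_integral * (grads @ grads.T)

# Example: constant coefficient c = 1 reproduces the D^T D / (4 Area) matrix.
print(element_stiffness_variable((0, 0), (1, 0), (0, 1), lambda x, y: 1.0))
```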
The condition number of the stiffness matrix depends strongly on the quality of the numerical grid. In particular, triangles with small angles in the finite element mesh induce large eigenvalues of the stiffness matrix, degrading the solution quality.
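This effect is already visible at the level of a single element. The following sketch (an illustration reusing the element_stiffness function from above, not a benchmark) shows that the largest eigenvalue of the element stiffness matrix grows sharply as one vertex is flattened toward the opposite edge:

```python
import numpy as np

# Compare an equilateral triangle with a thin "sliver" on the same base; the
# vertex coordinates are illustrative.
for apex in [(0.5, np.sqrt(3.0) / 2.0), (0.5, 0.01)]:
    A_k = element_stiffness((0.0, 0.0), (1.0, 0.0), apex)
    print(apex, "largest eigenvalue:", np.linalg.eigvalsh(A_k)[-1])
# The sliver's small angles yield a far larger maximal eigenvalue, which inflates
# the condition number of the assembled stiffness matrix.
```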
In vector calculus and differential geometry the generalized Stokes theorem, also called the Stokes–Cartan theorem, is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. In particular, the fundamental theorem of calculus is the special case where the manifold is a line segment, Green’s theorem and Stokes' theorem are the cases of a surface in $\mathbb{R}^2$ or $\mathbb{R}^3$, and the divergence theorem is the case of a volume in $\mathbb{R}^3$. Hence, the theorem is sometimes referred to as the Fundamental Theorem of Multivariate Calculus.
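In symbols, the theorem states that for a smooth (n − 1)-form ω with compact support on a smooth oriented n-dimensional manifold Ω with boundary ∂Ω,
$\int_\Omega d\omega = \int_{\partial\Omega} \omega .$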
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves or electromagnetic waves. It arises in fields like acoustics, electromagnetism, and fluid dynamics.
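In its simplest form, for a scalar quantity u(x, t) propagating with wave speed c, it reads
$\frac{\partial^2 u}{\partial t^2} = c^2 \, \nabla^2 u .$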
The Navier–Stokes equations are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed.
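In symbols, for a continuously differentiable vector field F on a volume V with closed boundary surface S,
$\int_V (\nabla \cdot \mathbf{F}) \, dV = \oint_S \mathbf{F} \cdot \mathbf{n} \, dS ,$
where n is the outward unit normal to S.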
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as
$D f = \lambda f$
for some scalar eigenvalue λ.
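For example, the exponential function f(x) = e^{λx} is an eigenfunction of the derivative operator D = d/dx with eigenvalue λ, since
$\frac{d}{dx} e^{\lambda x} = \lambda \, e^{\lambda x} .$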
In vector calculus, Green's theorem relates a line integral around a simple closed curve C to a double integral over the plane region D bounded by C. It is the two-dimensional special case of Stokes' theorem.
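Explicitly, if L and M have continuous partial derivatives on an open region containing D, then
$\oint_C \left( L \, dx + M \, dy \right) = \iint_D \left( \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \right) dx \, dy ,$
with C traversed counterclockwise.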
In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge.
The path integral formulation is a description in quantum mechanics that generalizes the stationary action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude.
In vector calculus, a conservative vector field is a vector field that is the gradient of some function. A conservative vector field has the property that its line integral is path independent; the choice of path between two points does not change the value of the line integral. Path independence of the line integral is equivalent to the vector field under the line integral being conservative. A conservative vector field is also irrotational; in three dimensions, this means that it has vanishing curl. An irrotational vector field is necessarily conservative provided that the domain is simply connected.
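In symbols: if v = ∇φ for a scalar potential φ, then for any piecewise-smooth path γ from a point A to a point B,
$\int_\gamma \mathbf{v} \cdot d\mathbf{r} = \varphi(B) - \varphi(A) ,$
and in three dimensions ∇ × v = 0.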
Geometrical optics, or ray optics, is a model of optics that describes light propagation in terms of rays. The ray in geometrical optics is an abstraction useful for approximating the paths along which light propagates under certain circumstances.
In quantum field theory, the Lehmann–Symanzik–Zimmermann (LSZ) reduction formula is a method to calculate S-matrix elements from the time-ordered correlation functions of a quantum field theory. It is a step of the path that starts from the Lagrangian of some quantum field theory and leads to the prediction of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann.
In calculus, the Leibniz integral rule for differentiation under the integral sign states that for an integral of the form
$\int_{a(x)}^{b(x)} f(x, t) \, dt ,$
the derivative of this integral is expressible as
$\frac{d}{dx} \left( \int_{a(x)}^{b(x)} f(x, t) \, dt \right) = f\big(x, b(x)\big) \, \frac{d}{dx} b(x) - f\big(x, a(x)\big) \, \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x, t) \, dt .$
In differential calculus, the Reynolds transport theorem, or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics.
The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. Solutions to the Navier–Stokes equations are used in many practical applications. However, theoretical understanding of the solutions to these equations is incomplete. In particular, solutions of the Navier–Stokes equations often include turbulence, which remains one of the greatest unsolved problems in physics, despite its immense importance in science and engineering.
In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the stable state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on.
The derivation of the Navier–Stokes equations, as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics.
In applied mathematics, discontinuous Galerkin methods (DG methods) form a class of numerical methods for solving differential equations. They combine features of the finite element and the finite volume framework and have been successfully applied to hyperbolic, elliptic, parabolic and mixed form problems arising from a wide range of applications. DG methods have in particular received considerable interest for problems with a dominant first-order part, e.g. in electrodynamics, fluid mechanics and plasma physics.
The finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential.
In mathematics, the Neumann–Poincaré operator or Poincaré–Neumann operator, named after Carl Neumann and Henri Poincaré, is a non-self-adjoint compact operator introduced by Poincaré to solve boundary value problems for the Laplacian on bounded domains in Euclidean space. Within the language of potential theory it reduces the partial differential equation to an integral equation on the boundary to which the theory of Fredholm operators can be applied. The theory is particularly simple in two dimensions—the case treated in detail in this article—where it is related to complex function theory, the conjugate Beurling transform or complex Hilbert transform and the Fredholm eigenvalues of bounded planar domains.
In numerical mathematics, the gradient discretisation method (GDM) is a framework which contains classical and recent numerical schemes for diffusion problems of various kinds: linear or non-linear, steady-state or time-dependent. The schemes may be conforming or non-conforming, and may rely on very general polygonal or polyhedral meshes.