Polyharmonic spline

In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines [1] [2] and natural cubic splines in one dimension. [3]


Definition

A polyharmonic spline is a linear combination of polyharmonic radial basis functions (RBFs) denoted by $\varphi$ plus a polynomial term:

$$f(\mathbf{x}) \;=\; \sum_{i=1}^{N} w_i\,\varphi\bigl(\lVert\mathbf{x}-\mathbf{c}_i\rVert\bigr) \;+\; \mathbf{v}^{\mathsf T}\begin{bmatrix}1\\ \mathbf{x}\end{bmatrix} \qquad (1)$$

where

- $\mathbf{x} = [x_1, \ldots, x_d]^{\mathsf T}$ is a real-valued vector of $d$ independent variables,
- $\mathbf{c}_i = [c_{1,i}, \ldots, c_{d,i}]^{\mathsf T}$ are the $N$ vectors called centers (the data points to be interpolated),
- $\mathbf{w} = [w_1, \ldots, w_N]^{\mathsf T}$ are the $N$ weights of the RBFs,
- $\mathbf{v} = [v_1, \ldots, v_{d+1}]^{\mathsf T}$ are the $d+1$ weights of the polynomial,
- $\lVert\cdot\rVert$ denotes the Euclidean norm.

Figure: Polyharmonic basis functions

The polynomial term with the coefficients $\mathbf{v}$ improves fitting accuracy for polyharmonic smoothing splines and also improves extrapolation away from the centers. See the figures in the Examples section below for a comparison of splines with and without the polynomial term.

The polyharmonic RBFs are of the form:

$$\varphi(r) \;=\; \begin{cases} r^{k} & k = 1, 3, 5, \ldots \\ r^{k}\,\ln(r) & k = 2, 4, 6, \ldots \end{cases} \qquad r = \lVert\mathbf{x}-\mathbf{c}_i\rVert .$$

Other values of the exponent $k$ are not useful (such as $\varphi(r) = r^{2}$), because a solution of the interpolation problem might not exist. To avoid problems at $r = 0$ (since $\ln(0) = -\infty$), the polyharmonic RBFs with the natural logarithm might be implemented as:

$$\varphi(r) \;=\; \begin{cases} r^{k-1}\,\ln\bigl(r^{r}\bigr) & r < 1 \\ r^{k}\,\ln(r) & r \ge 1 \end{cases}$$

or, more simply, by adding a continuity extension at $r = 0$:

$$\varphi(r) \;=\; \begin{cases} 0 & r = 0 \\ r^{k}\,\ln(r) & r > 0 . \end{cases}$$
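A small numerical sketch of the first, numerically safe formulation (illustrative Python, not from the cited references; the function name is made up here):

```python
# Numerically safe evaluation of phi(r) = r^k ln(r) for even k (illustrative sketch).
import numpy as np

def phi_even(r, k=2):
    """Polyharmonic RBF r^k * ln(r) for even k, evaluated without ln(0) = -inf."""
    r = np.asarray(r, dtype=float)
    small = r < 1.0
    out = np.empty_like(r)
    # For r < 1 use r^(k-1) * ln(r^r): since r^r -> 1 as r -> 0, the logarithm stays finite.
    out[small] = r[small]**(k - 1) * np.log(r[small]**r[small])
    # For r >= 1 the plain form is safe.
    out[~small] = r[~small]**k * np.log(r[~small])
    return out

print(phi_even(np.array([0.0, 0.5, 1.0, 2.0])))   # approx. [ 0.  -0.1733  0.  2.7726]
```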

The weights $w_i$ and $v_j$ are determined such that the function $f$ interpolates $N$ given data points $(\mathbf{c}_i, f_i)$ (for $i = 1, 2, \ldots, N$) and fulfills the $d+1$ orthogonality conditions

$$\sum_{i=1}^{N} w_i \;=\; 0, \qquad \sum_{i=1}^{N} w_i\,\mathbf{c}_i \;=\; \mathbf{0}.$$

All together, these constraints are equivalent to the symmetric linear system of equations

$$\begin{bmatrix} A & B^{\mathsf T} \\ B & 0 \end{bmatrix}\begin{bmatrix} \mathbf{w} \\ \mathbf{v} \end{bmatrix} \;=\; \begin{bmatrix} \mathbf{f} \\ \mathbf{0} \end{bmatrix} \qquad (2)$$

where

$$A_{ij} \;=\; \varphi\bigl(\lVert\mathbf{c}_i-\mathbf{c}_j\rVert\bigr), \qquad B \;=\; \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_N \end{bmatrix}, \qquad \mathbf{f} \;=\; [f_1, f_2, \ldots, f_N]^{\mathsf T},$$

so that $A$ is an $N \times N$ matrix, $B$ is a $(d+1) \times N$ matrix, and $\mathbf{f}$ contains the values to be interpolated.

In order for this system of equations to have a unique solution, $B$ must be full rank. $B$ is full rank under very mild conditions on the input data. For example, in two dimensions, three centers forming a non-degenerate triangle ensure that $B$ is full rank, and in three dimensions, four centers forming a non-degenerate tetrahedron ensure that $B$ is full rank. As explained later, the linear transformation resulting from the restriction of the domain of the linear transformation $A$ to the null space of $B$ is positive definite. This means that if $B$ is full rank, the system of equations (2) always has a unique solution, and it can be solved using a linear solver specialised for symmetric matrices. The computed weights allow evaluation of the spline for any $\mathbf{x}$ using equation (1). Many practical details of implementing and using polyharmonic splines are explained in Fasshauer. [4] In Iske [5] polyharmonic splines are treated as special cases of other multiresolution methods in scattered data modelling.
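As an illustration of equations (1) and (2), the following NumPy sketch (illustrative code, not taken from the cited references; all names are made up here) assembles and solves the symmetric system for the 2D thin plate RBF and then evaluates the spline:

```python
# Illustrative sketch of fitting and evaluating a polyharmonic spline via (1) and (2).
import numpy as np

def phi(r, k=2):
    """Polyharmonic RBF: r^k for odd k, r^k*ln(r) for even k (with phi(0) = 0)."""
    if k % 2:
        return r**k
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz]**k * np.log(r[nz])
    return out

def fit(centers, values, k=2):
    """Assemble and solve the symmetric system (2) for the weights w and v."""
    N, d = centers.shape
    A = phi(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1), k)
    B = np.hstack([np.ones((N, 1)), centers]).T            # (d+1) x N
    M = np.block([[A, B.T], [B, np.zeros((d + 1, d + 1))]])
    sol = np.linalg.solve(M, np.concatenate([values, np.zeros(d + 1)]))
    return sol[:N], sol[N:]                                 # RBF weights w, polynomial weights v

def evaluate(x, centers, w, v, k=2):
    """Evaluate the spline (1) at the query points x (shape M x d)."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return phi(r, k) @ w + np.hstack([np.ones((len(x), 1)), x]) @ v

rng = np.random.default_rng(0)
c = rng.random((20, 2))                                      # scattered 2D centers
f = np.sin(c[:, 0]) * c[:, 1]                                # values to interpolate
w, v = fit(c, f)
print(np.allclose(evaluate(c, c, w, v), f))                  # True: interpolation at the centers
```

For large $N$, a solver that exploits the symmetry of the system (or the acceleration techniques discussed below) would replace the generic np.linalg.solve call.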

Discussion

The main advantage of polyharmonic spline interpolation is that usually very good interpolation results are obtained for scattered data without performing any "tuning", so automatic interpolation is feasible. This is not the case for other radial basis functions. For example, the Gaussian function $\exp(-(\epsilon r)^2)$ needs to be tuned: the shape parameter $\epsilon$ must be selected according to the underlying grid of the independent variables. If this grid is non-uniform, a proper selection of $\epsilon$ to achieve a good interpolation result is difficult or impossible.

Main disadvantages are:

- To determine the weights, a dense linear system of equations must be solved. Solving a dense system becomes impractical for large $N$, since it requires $O(N^2)$ memory and $O(N^3)$ operations.
- A single evaluation of the spline costs $O(N)$ operations (one term per center), so evaluating it at $M$ points costs $O(MN)$, which is expensive when both $M$ and $N$ are large.

Fast construction and evaluation methods


One straightforward approach to speeding up model construction and evaluation is to use a subset of the $k$ nearest interpolation nodes to build a local model every time the spline is evaluated. As a result, the total time needed for model construction and evaluation at $M$ points changes from roughly $O(N^3 + MN)$ for one global model to roughly $O(M k^3)$ for the local models. This can yield better timings if $k$ is much less than $N$. Such an approach is advocated by some software libraries, the most notable being scipy.interpolate.RBFInterpolator. The main drawback is that it introduces small discontinuities in the spline and requires problem-specific tuning: a proper choice of the neighbors count $k$. Recently, methods have been developed to overcome the aforementioned difficulties without sacrificing the main advantages of polyharmonic splines.
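For instance, SciPy's scipy.interpolate.RBFInterpolator exposes exactly this trade-off through its neighbors argument; the short sketch below uses synthetic data purely for illustration.

```python
# Global vs. local polyharmonic interpolation with SciPy (illustrative, synthetic data).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
nodes = rng.random((2000, 2))                         # N scattered nodes in the unit square
values = np.sin(4 * nodes[:, 0]) * nodes[:, 1]

# Global model: uses all N nodes at once (dense O(N^3) fit).
global_model = RBFInterpolator(nodes, values, kernel="thin_plate_spline")

# Local model: each evaluation point uses only its k = 50 nearest nodes,
# which is much cheaper for large N but may introduce small discontinuities.
local_model = RBFInterpolator(nodes, values, kernel="thin_plate_spline", neighbors=50)

query = rng.random((5, 2))
print(global_model(query))
print(local_model(query))
```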

First, a number of methods for fast evaluation were proposed, notably hierarchical far-field expansion schemes for evaluating polyharmonic splines in three and four dimensions. [6] [7]

Second, accelerated model construction by applying an iterative solver to an ACBF-preconditioned linear system was proposed by Brown et al. [8] This approach greatly reduces the running time of the fitting step compared with a direct dense solve, with further gains when it is combined with the accelerated evaluation techniques above.

The approaches above are often employed by commercial geospatial data analysis libraries and by some open-source implementations (e.g. ALGLIB). Sometimes domain decomposition methods are used to improve asymptotic behavior, reducing memory requirements from $O(N^2)$ to $O(N)$ and thus making polyharmonic splines suitable for datasets with more than 1,000,000 points.

Reason for the name "polyharmonic"

A polyharmonic equation is a partial differential equation of the form $\Delta^{m} f = 0$ for any natural number $m$, where $\Delta$ is the Laplace operator. For example, the biharmonic equation is $\Delta^{2} f = 0$ and the triharmonic equation is $\Delta^{3} f = 0$. All the polyharmonic radial basis functions are solutions of a polyharmonic equation (or more accurately, a modified polyharmonic equation with a Dirac delta function on the right hand side instead of 0). For example, the thin plate radial basis function is a solution of the modified 2-dimensional biharmonic equation. [9] Applying the 2D Laplace operator ($\Delta = \partial^2/\partial x^2 + \partial^2/\partial y^2$) to the thin plate radial basis function $f_{\mathrm{tp}}(x,y) = (x^2+y^2)\ln\sqrt{x^2+y^2}$ either by hand or using a computer algebra system shows that $\Delta f_{\mathrm{tp}} = 4 + 4\ln r$. Applying the Laplace operator to $\Delta f_{\mathrm{tp}}$ (this is $\Delta^{2} f_{\mathrm{tp}}$) yields 0. But 0 is not exactly correct. To see this, replace $x^2+y^2$ with $x^2+y^2+\varepsilon^2$ (where $\varepsilon$ is some small number tending to 0). The Laplace operator applied to $4 + 4\ln\sqrt{x^2+y^2+\varepsilon^2}$ yields

$$\Delta\bigl(4 + 4\ln\sqrt{x^2+y^2+\varepsilon^2}\bigr) \;=\; \frac{8\varepsilon^{2}}{(x^2+y^2+\varepsilon^2)^{2}}.$$

For $(x,y) = (0,0)$, the right hand side of this equation approaches infinity as $\varepsilon$ approaches 0. For any other $(x,y)$, the right hand side approaches 0 as $\varepsilon$ approaches 0. This indicates that the right hand side is a Dirac delta function. A computer algebra system will show that

$$\lim_{\varepsilon\to 0}\ \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{8\varepsilon^{2}}{(x^2+y^2+\varepsilon^2)^{2}}\; dx\, dy \;=\; 8\pi .$$

So the thin plate radial basis function is a solution of the equation

$$\Delta^{2}\bigl(r^{2}\ln r\bigr) \;=\; 8\pi\,\delta(x,y).$$

Applying the 3D Laplacian ($\Delta = \partial^2/\partial x^2 + \partial^2/\partial y^2 + \partial^2/\partial z^2$) to the biharmonic RBF $f_{\mathrm{bi}} = r = \sqrt{x^2+y^2+z^2}$ yields $\Delta f_{\mathrm{bi}} = 2/r$, and applying the 3D bilaplacian ($\Delta^2$) to the triharmonic RBF $f_{\mathrm{tri}} = r^{3}$ yields $\Delta^{2} f_{\mathrm{tri}} = 24/r$. Letting $r = \sqrt{x^2+y^2+z^2+\varepsilon^2}$ and computing the remaining Laplacian again indicates that the right hand sides of the PDEs for the biharmonic and triharmonic RBFs are Dirac delta functions. Since

$$\lim_{\varepsilon\to 0}\ \int_{\mathbb{R}^{3}} \Delta\!\left(\frac{1}{\sqrt{x^2+y^2+z^2+\varepsilon^2}}\right) dx\, dy\, dz \;=\; -4\pi ,$$

the exact PDEs satisfied by the biharmonic and triharmonic RBFs are $\Delta^{2} r = -8\pi\,\delta(x,y,z)$ and $\Delta^{3} r^{3} = -96\pi\,\delta(x,y,z)$.
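The 2D computation above can be checked with a few lines of SymPy (a minimal sketch, not part of the article; symbol and function names are illustrative):

```python
# Minimal SymPy check of the 2D thin plate computation above (illustrative script).
import sympy as sp

x, y, eps, rho = sp.symbols('x y epsilon rho', positive=True)

def lap2(f):
    """2D Laplace operator d^2/dx^2 + d^2/dy^2."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

r = sp.sqrt(x**2 + y**2)
print(sp.simplify(lap2(r**2 * sp.log(r))))        # equivalent to 4*ln(r) + 4

# Regularize r and apply the Laplacian once more
r_eps = sp.sqrt(x**2 + y**2 + eps**2)
print(sp.simplify(lap2(4 + 4*sp.log(r_eps))))     # 8*epsilon**2/(x**2 + y**2 + epsilon**2)**2

# Its integral over the whole plane equals 8*pi for every eps > 0 (polar coordinates,
# area element rho*drho*dtheta), identifying the right hand side as 8*pi*delta(x, y).
radial = 8*eps**2 * rho / (rho**2 + eps**2)**2
print(2*sp.pi * sp.integrate(radial, (rho, 0, sp.oo)))   # 8*pi
```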

Polyharmonic smoothing splines

Polyharmonic splines minimize

$$\sum_{i=1}^{N}\bigl(f(\mathbf{c}_i)-f_i\bigr)^{2} \;+\; \lambda \int_{B} \lvert\nabla^{m} f\rvert^{2}\, d\mathbf{x} \qquad (3)$$

where $B$ is some box in $\mathbb{R}^{d}$ containing a neighborhood of all the centers, $\lambda$ is some positive constant, and $\nabla^{m} f$ is the vector of all $m$th order partial derivatives of $f$. For example, in 2D $\nabla^{2} f = (f_{xx},\, f_{xy},\, f_{yx},\, f_{yy})$ and in 3D $\nabla^{2} f = (f_{xx},\, f_{xy},\, f_{xz},\, f_{yx},\, f_{yy},\, f_{yz},\, f_{zx},\, f_{zy},\, f_{zz})$. In 2D, $m = 2$, making the integral the simplified thin plate energy functional.
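Written out for that 2D case ($m = 2$), the penalty term is the familiar simplified thin plate bending energy:

$$\int_{B} \lvert\nabla^{2} f\rvert^{2}\, d\mathbf{x} \;=\; \int_{B} \bigl(f_{xx}^{2} + 2 f_{xy}^{2} + f_{yy}^{2}\bigr)\, dx\, dy .$$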

To show that polyharmonic splines minimize equation (3), the fitting term must be transformed into an integral using the definition of the Dirac delta function:

$$\sum_{i=1}^{N}\bigl(f(\mathbf{c}_i)-f_i\bigr)^{2} \;=\; \int_{B}\ \sum_{i=1}^{N}\bigl(f(\mathbf{x})-f_i\bigr)^{2}\,\delta(\mathbf{x}-\mathbf{c}_i)\; d\mathbf{x}.$$

So equation (3) can be written as the functional

$$J[f] \;=\; \int_{B} F\; d\mathbf{x} \;=\; \int_{B}\Bigl[\ \sum_{i=1}^{N}\bigl(f(\mathbf{x})-f_i\bigr)^{2}\,\delta(\mathbf{x}-\mathbf{c}_i) \;+\; \lambda \sum_{\alpha}\bigl(\partial^{\alpha} f\bigr)^{2}\Bigr]\, d\mathbf{x}$$

where $\alpha$ is a multi-index that ranges over all partial derivatives of order $m$ of $f$. In order to apply the Euler–Lagrange equation for a single function of multiple variables and higher order derivatives, the quantities

$$\frac{\partial F}{\partial f} \;=\; 2\sum_{i=1}^{N}\bigl(f(\mathbf{x})-f_i\bigr)\,\delta(\mathbf{x}-\mathbf{c}_i)$$

and

$$\frac{\partial F}{\partial(\partial^{\alpha} f)} \;=\; 2\lambda\,\partial^{\alpha} f$$

are needed. Inserting these quantities into the E−L equation shows that

$$2\sum_{i=1}^{N}\bigl(f(\mathbf{x})-f_i\bigr)\,\delta(\mathbf{x}-\mathbf{c}_i) \;+\; 2\lambda\,(-1)^{m}\,\Delta^{m} f(\mathbf{x}) \;=\; 0. \qquad (4)$$

A weak solution of (4) satisfies

$$\sum_{i=1}^{N}\bigl(f(\mathbf{c}_i)-f_i\bigr)\, g(\mathbf{c}_i) \;+\; \lambda\,(-1)^{m}\int_{B} \Delta^{m} f(\mathbf{x})\, g(\mathbf{x})\; d\mathbf{x} \;=\; 0 \qquad (5)$$

for all smooth test functions $g$ that vanish outside of $B$. A weak solution of equation (4) will still minimize (3) while getting rid of the delta function through integration. [10]

Let $f$ be a polyharmonic spline as defined by equation (1). The following calculations will show that $f$ satisfies (5). Applying the $\Delta^{m}$ operator to equation (1) yields

$$\Delta^{m} f(\mathbf{x}) \;=\; C \sum_{i=1}^{N} w_i\,\delta(\mathbf{x}-\mathbf{c}_i)$$

where $C$ is the constant of the modified polyharmonic equation satisfied by $\varphi$ (for example, $C = 8\pi$ for the 2D thin plate RBF) and the polynomial term of (1) is annihilated by $\Delta^{m}$. So (5) is equivalent to

$$\sum_{i=1}^{N}\Bigl[\, f(\mathbf{c}_i)-f_i \;+\; \lambda\,(-1)^{m}\, C\, w_i\Bigr]\, g(\mathbf{c}_i) \;=\; 0. \qquad (6)$$

The only possible solution to (6) for all test functions $g$ is

$$f(\mathbf{c}_i)-f_i \;+\; \lambda\,(-1)^{m}\, C\, w_i \;=\; 0 \qquad (i = 1, \ldots, N) \qquad (7)$$

(which implies interpolation if $\lambda = 0$). Combining the definition of $f$ in equation (1) with equation (7) results in almost the same linear system as equation (2) except that the matrix $A$ is replaced with $A + \bar\lambda I$, where $\bar\lambda = (-1)^{m} C\,\lambda$ and $I$ is the identity matrix. For example, for the 3D triharmonic RBF $r^{3}$, $A$ is replaced with $A + 96\pi\lambda\, I$.
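As a concrete illustration of this one-line change, the following variant of the earlier NumPy sketch builds a 2D thin plate smoothing spline; the $8\pi$ factor is the constant derived in the previous section, $\lambda = 0$ recovers plain interpolation, and all names are illustrative rather than taken from the cited references.

```python
# Polyharmonic smoothing spline for the 2D thin plate RBF (illustrative sketch).
import numpy as np

def phi_tp(r):
    """2D thin plate RBF r^2 ln(r), with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz]**2 * np.log(r[nz])
    return out

def fit_smoothing(centers, values, lam):
    """Same system as (2), but with A replaced by A + 8*pi*lam*I (2D thin plate case)."""
    N, d = centers.shape
    A = phi_tp(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1))
    A = A + 8 * np.pi * lam * np.eye(N)         # the only change relative to interpolation
    B = np.hstack([np.ones((N, 1)), centers]).T
    M = np.block([[A, B.T], [B, np.zeros((d + 1, d + 1))]])
    sol = np.linalg.solve(M, np.concatenate([values, np.zeros(d + 1)]))
    return sol[:N], sol[N:]                      # RBF weights w, polynomial weights v
```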

Explanation of additional constraints

In (2), the bottom half of the system of equations ($B\mathbf{w} = \mathbf{0}$) is given without explanation. The explanation first requires deriving a simplified form of $\int_{B}\lvert\nabla^{m} f\rvert^{2}\,d\mathbf{x}$ when $B$ is all of $\mathbb{R}^{d}$.

First, require that $\sum_{i=1}^{N} w_i = 0$. This ensures that all derivatives of order $m$ and higher of $f$ vanish at infinity. For example, let $d = 3$, let $\varphi$ be the triharmonic RBF $r^{3}$, and let $f(\mathbf{x}) = \sum_{i=1}^{N} w_i\,\lVert\mathbf{x}-\mathbf{c}_i\rVert^{3}$ (considering $f$ as a mapping from $\mathbb{R}^{3}$ to $\mathbb{R}$ and omitting the polynomial term). For a given center $\mathbf{c}_i$,

On a line for arbitrary point and unit vector

Dividing both numerator and denominator of this by shows that a quantity independent of the center So on the given line,

It is not quite enough to require that because in what follows it is necessary for to vanish at infinity, where and are multi-indices such that For triharmonic (where and are the weights and centers of ) is always a sum of total degree 5 polynomials in and divided by the square root of a total degree 8 polynomial. Consider the behavior of these terms on the line as approaches infinity. The numerator is a degree 5 polynomial in Dividing numerator and denominator by leaves the degree 4 and 5 terms in the numerator and a function of only in the denominator. A degree 5 term divided by is a product of five coordinates and The (and ) constraint makes this vanish everywhere on the line. A degree 4 term divided by is either a product of four coordinates and an coordinate or a product of four coordinates and a single or coordinate. The constraint makes the first type of term vanish everywhere on the line. The additional constraints will make the second type of term vanish.

Now define the inner product of two functions $f, g : \mathbb{R}^{3} \to \mathbb{R}$, each defined as a linear combination of polyharmonic RBFs with $\sum_i w_i = 0$ and $\sum_i w_i\mathbf{c}_i = \mathbf{0}$, as

$$\langle f, g\rangle \;=\; \int_{\mathbb{R}^{3}} \nabla^{m} f \cdot \nabla^{m} g\; d\mathbf{x}.$$

Integration by parts shows that

$$\langle f, g\rangle \;=\; (-1)^{m}\int_{\mathbb{R}^{3}} g\,\Delta^{m} f\; d\mathbf{x}. \qquad (8)$$

For example, let and Then

 

 

 

 

(9)

Integrating the first term of this by parts once yields

since vanishes at infinity. Integrating by parts again results in

So integrating by parts twice for each term of ( 9 ) yields

Since $\Delta^{m} f = C\sum_{i} w_i\,\delta(\mathbf{x}-\mathbf{c}_i)$, (8) shows that

$$\langle f, g\rangle \;=\; (-1)^{m}\, C \sum_{i=1}^{N} w_i\, g(\mathbf{c}_i).$$

So if $f$ and $g$ are both such linear combinations of polyharmonic RBFs, and in particular if $g = f$,

$$\int_{\mathbb{R}^{3}} \lvert\nabla^{m} f\rvert^{2}\, d\mathbf{x} \;=\; \langle f, f\rangle \;=\; (-1)^{m}\, C \sum_{i=1}^{N} w_i\, f(\mathbf{c}_i) \;=\; (-1)^{m}\, C\, \mathbf{w}^{\mathsf T} A\,\mathbf{w}. \qquad (10)$$

Now the origin of the constraints $B\mathbf{w} = \mathbf{0}$ can be explained. Here the spline $f$ is a generalization of the one defined above whose polynomial part may include monomials up to degree $p$. In other words,

$$f(\mathbf{x}) \;=\; \sum_{i=1}^{N} w_i\,\varphi\bigl(\lVert\mathbf{x}-\mathbf{c}_i\rVert\bigr) \;+\; \mathbf{v}^{\mathsf T}\mathbf{p}(\mathbf{x})$$

where $\mathbf{p}(\mathbf{x})$ is a column vector of all degree $p$ and lower monomials of the coordinates of $\mathbf{x}$, and the columns of $B$ are the vectors $\mathbf{p}(\mathbf{c}_i)$. The top half of (2) is equivalent to $A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f} = \mathbf{0}$. So to obtain a smoothing spline, one should minimize the scalar field $J(\mathbf{w},\mathbf{v})$ defined by

$$J(\mathbf{w},\mathbf{v}) \;=\; \bigl\lVert A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f}\bigr\rVert^{2} \;+\; \bar\lambda\,\mathbf{w}^{\mathsf T} A\,\mathbf{w}.$$

The equations

$$\frac{\partial J}{\partial \mathbf{w}} \;=\; 2A\bigl(A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f}\bigr) \;+\; 2\bar\lambda\, A\,\mathbf{w} \;=\; \mathbf{0}$$

and

$$\frac{\partial J}{\partial \mathbf{v}} \;=\; 2B\bigl(A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f}\bigr) \;=\; \mathbf{0}$$

(where $\partial J/\partial\mathbf{w}$ and $\partial J/\partial\mathbf{v}$ denote the gradients of $J$ with respect to $\mathbf{w}$ and $\mathbf{v}$) are equivalent to the two systems of linear equations $A\bigl(A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f} + \bar\lambda\mathbf{w}\bigr) = \mathbf{0}$ and $B\bigl(A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f}\bigr) = \mathbf{0}$. Since $A$ is invertible, the first system is equivalent to $A\mathbf{w} + B^{\mathsf T}\mathbf{v} - \mathbf{f} + \bar\lambda\mathbf{w} = \mathbf{0}$. So the first system implies the second system is equivalent to $B\mathbf{w} = \mathbf{0}$. Just as in the previous smoothing spline coefficient derivation, the top half of (2) becomes $\bigl(A + \bar\lambda I\bigr)\mathbf{w} + B^{\mathsf T}\mathbf{v} = \mathbf{f}$.

This derivation of the polyharmonic smoothing spline equation system did not assume the constraints necessary to guarantee that $\int_{\mathbb{R}^{d}}\lvert\nabla^{m} f\rvert^{2}\,d\mathbf{x} = (-1)^{m} C\,\mathbf{w}^{\mathsf T} A\,\mathbf{w}$. But the constraints necessary to guarantee this, $\sum_i w_i = 0$ and $\sum_i w_i\mathbf{c}_i = \mathbf{0}$, are a subset of $B\mathbf{w} = \mathbf{0}$, which is true for the critical point $\mathbf{w}$ of $J$. So $B\mathbf{w} = \mathbf{0}$ is true for the $\mathbf{w}$ formed from the solution of the polyharmonic smoothing spline equation system. Because the integral is positive for all $\mathbf{w} \neq \mathbf{0}$, the linear transformation resulting from the restriction of the domain of the linear transformation $A$ to the $\mathbf{w}$ such that $B\mathbf{w} = \mathbf{0}$ must be positive definite. This fact enables transforming the polyharmonic smoothing spline equation system to a symmetric positive definite system of equations that can be solved twice as fast using the Cholesky decomposition. [9]

Examples

The next figure shows the interpolation through four points (marked by circles) using different types of polyharmonic splines. The "curvature" of the interpolated curves grows with the order of the spline, and the extrapolation at the left boundary (x < 0) is reasonable. The figure also includes the radial basis function φ = exp(−r²), which gives a good interpolation as well. Finally, the figure also includes the non-polyharmonic spline φ = r² to demonstrate that this radial basis function is not able to pass through the predefined points (the linear equation system has no solution and is solved in a least squares sense).

Figure: Interpolation with different polyharmonic splines that shall pass through the 4 predefined points marked by a circle (the interpolation with φ = r² is not useful, since the linear equation system of the interpolation problem has no solution; it is solved in a least squares sense, but then does not pass through the centers)
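The failure of φ = r² can also be checked numerically: for four distinct centers on the line, the system matrix of (2) is rank deficient, so generic data cannot be matched exactly (a small illustrative check):

```python
# Illustrative check that phi(r) = r^2 cannot interpolate 4 generic 1D points.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0])                  # 4 distinct centers on the line
A = (x[:, None] - x[None, :])**2                    # phi = r^2
B = np.vstack([np.ones_like(x), x])                 # polynomial block (2 x 4)
M = np.block([[A, B.T], [B, np.zeros((2, 2))]])
print(np.linalg.matrix_rank(M), "of", M.shape[0])   # 4 of 6: rank deficient
```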

The next figure shows the same interpolation as in the first figure, with the only exception that the points to be interpolated are scaled by a factor of 100 (and the case φ = r² is no longer included). Since φ = (scale·r)^k = scale^k·r^k, the factor scale^k can be extracted from the matrix A of the linear equation system, and therefore the solution is not influenced by the scaling. This is different for the logarithmic form of the spline, although the scaling has little influence. This analysis is reflected in the figure, where the interpolation shows little difference. Note that for other radial basis functions, such as φ = exp(−kr²) with k = 1, the interpolation is no longer reasonable and it would be necessary to adapt k.

Figure: The same interpolation as in the first figure, but with the points to be interpolated scaled by 100

The next figure shows the same interpolation as in the first figure, with the only exception that the polynomial term of the function is not taken into account (and the case φ = r² is no longer included). As can be seen from the figure, the extrapolation for x < 0 is no longer as "natural" as in the first figure for some of the basis functions. This indicates that the polynomial term is useful if extrapolation occurs.

Figure: The same interpolation as in the first figure, but without the polynomial term


References

  1. R.L. Harder and R.N. Desmarais: Interpolation using surface splines. Journal of Aircraft, 1972, Issue 2, pp. 189–191
  2. J. Duchon: Splines minimizing rotation-invariant semi-norms in Sobolev spaces. Constructive Theory of Functions of Several Variables, W. Schempp and K. Zeller (eds), Springer, Berlin, 1977, pp. 85–100
  3. Wendland, Holger (2005). Scattered Data Approximation. Cambridge University Press. p. 9. ISBN 0521843359.
  4. G.E. Fasshauer: Meshfree Approximation Methods with MATLAB. World Scientific Publishing Company, 2007, ISBN-10: 9812706348
  5. A. Iske: Multiresolution Methods in Scattered Data Modelling, Lecture Notes in Computational Science and Engineering, 2004, Vol. 37, ISBN 3-540-20479-2, Springer-Verlag, Heidelberg.
  6. R.K. Beatson, M.J.D. Powell, and A.M. Tan: Fast evaluation of polyharmonic splines in three dimensions. IMA Journal of Numerical Analysis, 2007, 27, pp. 427–450.
  7. J.B. Cherrie, R.K. Beatson, and D.L. Ragozin (2000): Fast evaluation of radial basis functions: methods for four-dimensional polyharmonic splines.
  8. Damian Brown, Leevan Ling, Edward Kansa, and Jeremy Levesley (2000): On Approximate Cardinal Preconditioning Methods for Solving PDEs with Radial Basis Functions.
  9. Powell, M. J. D. (1993). "Some algorithms for thin plate spline interpolation to functions of two variables" (PDF). Cambridge University Dept. of Applied Mathematics and Theoretical Physics Technical Report. Archived from the original (PDF) on 2016-01-25. Retrieved January 7, 2016.
  10. Evans, Lawrence (1998). Partial Differential Equations. Providence: American Mathematical Society. pp. 450–452. ISBN 0-8218-0772-2.
