Thin plate splines (TPS) are a spline-based technique for data interpolation and smoothing. "A spline is a function defined by polynomials in a piecewise manner." [1] [2] They were introduced to geometric design by Duchon. [3] They are an important special case of a polyharmonic spline. Robust Point Matching (RPM) is a common extension, and the combined method is known as the TPS-RPM algorithm. [4]
The name thin plate spline refers to a physical analogy involving the bending of a plate or thin sheet of metal. Just as the metal has rigidity, the TPS fit also resists bending, implying a penalty involving the smoothness of the fitted surface. In the physical setting, the deflection is in the $z$ direction, orthogonal to the plane. In order to apply this idea to the problem of coordinate transformation, one interprets the lifting of the plate as a displacement of the $x$ or $y$ coordinates within the plane. In 2D cases, given a set of $K$ corresponding control points (knots), the TPS warp is described by $2(K+3)$ parameters, which include 6 global affine motion parameters and $2K$ coefficients for correspondences of the control points. These parameters are computed by solving a linear system; in other words, TPS has a closed-form solution.
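As an illustration of this closed-form solution, the following is a minimal NumPy sketch (not taken from the cited sources; the names tps_kernel and fit_tps_2d are illustrative). It assumes the standard TPS kernel $U(r) = r^2 \log r$ introduced below and the usual side conditions that the warping coefficients vanish against the homogeneous coordinates, and it solves a single bordered linear system for the 6 affine parameters and $2K$ warping coefficients; a zero smoothing parameter gives exact interpolation, a positive one the smoothing variant described below.

```python
import numpy as np

def tps_kernel(r):
    """TPS radial basis U(r) = r^2 log r, with U(0) defined as 0."""
    return np.where(r == 0.0, 0.0, r**2 * np.log(r + (r == 0.0)))

def fit_tps_2d(x, y, lam=0.0):
    """Solve the TPS linear system for a 2D warp.

    x, y : (K, 2) arrays of corresponding source and target control points.
    lam  : smoothing parameter; 0 gives exact interpolation.
    Returns (d, c): d is the (3, 2) affine block, c the (K, 2) warping block.
    """
    K = x.shape[0]
    Phi = tps_kernel(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1))  # (K, K) kernel matrix
    P = np.hstack([np.ones((K, 1)), x])        # (K, 3) homogeneous coordinates
    A = np.zeros((K + 3, K + 3))               # bordered system matrix
    A[:K, :K] = Phi + lam * np.eye(K)
    A[:K, K:] = P
    A[K:, :K] = P.T                            # side conditions P^T c = 0
    b = np.vstack([y, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)             # (K + 3, 2) stacked [c; d]
    return params[K:], params[:K]
```

For example, with four control points x = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]]) and targets y displaced from them, fit_tps_2d(x, y) returns the affine and warping blocks of the $2(K+3) = 14$ parameters.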
The TPS arises from consideration of the integral of the square of the second derivative, which forms its smoothness measure. In the case where $x$ is two dimensional, for interpolation, the TPS fits a mapping function $f(x)$ between corresponding point-sets $\{y_i\}$ and $\{x_i\}$ that minimizes the following energy function:

$$E_{\mathrm{tps}}(f) = \iint \left[\left(\frac{\partial^2 f}{\partial x_1^2}\right)^2 + 2\left(\frac{\partial^2 f}{\partial x_1\,\partial x_2}\right)^2 + \left(\frac{\partial^2 f}{\partial x_2^2}\right)^2\right] dx_1\, dx_2$$

subject to the interpolation constraints $f(x_i) = y_i$.
The smoothing variant, correspondingly, uses a tuning parameter $\lambda$ to control the rigidity of the deformation, balancing the aforementioned criterion with the measure of goodness of fit, thus minimizing: [1] [2]

$$E_{\mathrm{tps,smooth}}(f) = \sum_{i=1}^{K} \left\|y_i - f(x_i)\right\|^2 + \lambda \iint \left[\left(\frac{\partial^2 f}{\partial x_1^2}\right)^2 + 2\left(\frac{\partial^2 f}{\partial x_1\,\partial x_2}\right)^2 + \left(\frac{\partial^2 f}{\partial x_2^2}\right)^2\right] dx_1\, dx_2$$
For this variational problem, it can be shown that there exists a unique minimizer $f$. [5] The finite element discretization of this variational problem, the method of elastic maps, is used for data mining and nonlinear dimensionality reduction. In simple words, "the first term is defined as the error measurement term and the second regularisation term is a penalty on the smoothness of $f$." [1] [2] The regularisation term is in general needed to make the mapping unique.
The thin plate spline has a natural representation in terms of radial basis functions. Given a set of control points $\{w_i,\ i = 1, 2, \ldots, K\}$, a radial basis function defines a spatial mapping which maps any location $x$ in space to a new location $f(x)$, represented by

$$f(x) = \sum_{i=1}^{K} c_i\, \varphi\!\left(\|x - w_i\|\right)$$
where $\|\cdot\|$ denotes the usual Euclidean norm and $\{c_i\}$ is a set of mapping coefficients. The TPS corresponds to the radial basis kernel $\varphi(r) = r^2 \log r$.
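As a small sketch (continuing the NumPy helpers above; rbf_map is an illustrative name, not an established API), the radial-basis mapping with the TPS kernel can be evaluated as follows, where centers plays the role of the control points $w_i$ and coeffs of the coefficients $c_i$:

```python
def rbf_map(x, centers, coeffs):
    """Evaluate f(x) = sum_i c_i * U(||x - w_i||) at the query locations x.

    x : (N, 2) query points, centers : (K, 2), coeffs : (K,) or (K, 2).
    """
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)  # (N, K) distances
    return tps_kernel(r) @ coeffs
```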
Suppose the points are in 2 dimensions ($D = 2$). One can use homogeneous coordinates for the point-set, where a point $y_i$ is represented as a vector $(1, y_{ix}, y_{iy})$. The unique minimizer $f$ is parameterized by $\alpha$, which consists of two matrices $d$ and $c$ ($\alpha = \{d, c\}$):

$$f_{\mathrm{tps}}(z, \alpha) = f_{\mathrm{tps}}(z, d, c) = z \cdot d + \sum_{i=1}^{K} \varphi\!\left(\|z - x_i\|\right) \cdot c_i$$
where $d$ is a $(D+1) \times (D+1)$ matrix representing the affine transformation (hence $z$ is a $1 \times (D+1)$ row vector) and $c$ is a $K \times (D+1)$ warping coefficient matrix representing the non-affine deformation. The kernel function $\varphi(z)$ is a $1 \times K$ vector for each point $z$, where each entry $\varphi_i(z) = \|z - x_i\|^2 \log \|z - x_i\|$. Note that for TPS, the control points $\{w_i\}$ are chosen to be the same as the set of points to be warped $\{x_i\}$, so $\{x_i\}$ is already used in the place of the control points.
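Continuing the same sketch, and using the simplified (assumed) convention that $d$ is stored as a $3 \times 2$ block and $c$ as a $K \times 2$ block so that the warp returns plain 2D coordinates rather than homogeneous ones, the parameterized map can be applied as:

```python
def tps_warp(z, x_ctrl, d, c):
    """Apply f_tps(z, d, c) = [1, z] @ d + phi(z) @ c to the query points z.

    z : (N, 2) points to warp, x_ctrl : (K, 2) control points,
    d : (3, 2) affine block, c : (K, 2) non-affine warping coefficients.
    """
    Z = np.hstack([np.ones((z.shape[0], 1)), z])   # homogeneous coordinates (N, 3)
    phi = tps_kernel(np.linalg.norm(z[:, None, :] - x_ctrl[None, :, :], axis=-1))  # (N, K)
    return Z @ d + phi @ c                         # affine part + non-affine part
```

With $\lambda = 0$, d, c = fit_tps_2d(x, y) followed by tps_warp(x, x, d, c) should reproduce y up to numerical error, since the interpolating spline passes through the control points.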
If one substitutes the solution for $f$, $E_{\mathrm{tps}}$ becomes:

$$E_{\mathrm{tps}}(d, c) = \left\|Y - Xd - \Phi c\right\|^2 + \lambda\, \mathrm{Tr}\!\left(c^{\mathsf{T}} \Phi c\right)$$
where $Y$ and $X$ are just concatenated versions of the point coordinates $y_i$ and $x_i$, and $\Phi$ is a $(K \times K)$ matrix formed from the $\varphi(\|x_i - x_j\|)$. Each row of each newly formed matrix comes from one of the original vectors. The matrix $\Phi$ represents the TPS kernel. Loosely speaking, the TPS kernel contains the information about the point-set's internal structural relationships. When it is combined with the warping coefficients $c$, a non-rigid warping is generated.
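Under the same assumed conventions as the earlier sketches (Y holding the target coordinates row-wise, X the homogeneous source coordinates, and a trace used to reduce $c^{\mathsf{T}} \Phi c$ to a scalar because $c$ has two columns), the substituted energy can be evaluated as:

```python
def tps_energy(x, y, d, c, lam):
    """Evaluate E_tps(d, c) = ||Y - X d - Phi c||^2 + lam * Tr(c^T Phi c)."""
    K = x.shape[0]
    X = np.hstack([np.ones((K, 1)), x])             # (K, 3) concatenated source coordinates
    Phi = tps_kernel(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1))  # TPS kernel matrix
    fit = np.linalg.norm(y - X @ d - Phi @ c) ** 2  # goodness-of-fit term
    bend = np.trace(c.T @ Phi @ c)                  # bending (smoothness) term
    return fit + lam * bend
```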
A nice property of the TPS is that it can always be decomposed into a global affine and a local non-affine component. Consequently, the TPS smoothness term is solely dependent on the non-affine components. This is a desirable property, especially when compared to other splines, since the global pose parameters included in the affine transformation are not penalized.
TPS has been widely used as the non-rigid transformation model in image alignment and shape matching. [6] An additional application is the analysis and comparison of archaeological findings in 3D; [7] it has been implemented for triangular meshes in the GigaMesh Software Framework. [8]
The thin plate spline has a number of properties which have contributed to its popularity:

- It produces smooth surfaces.
- It has a closed-form solution: the warp parameters are obtained by solving a linear system.
- It decomposes into a global affine component and a local non-affine component, and only the non-affine component is penalized by the smoothness term.
- Its energy function has a physical interpretation as the bending energy of a thin metal plate.
However, note that even in one dimension splines can cause severe "overshoots". In 2D such effects can be much more critical, because TPS are not objective. [citation needed]