In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, [1] motion estimation and 3D reconstruction, [2] object detection and pose estimation, [3] [4] robotic manipulation, [5] simultaneous localization and mapping (SLAM), [6] [7] panorama stitching, [8] virtual and augmented reality, [9] and medical imaging. [10]
As a special case, registration of two point sets that only differ by a 3D rotation (i.e., there is no scaling and translation) is called the Wahba problem and is also related to the orthogonal Procrustes problem.
The problem may be summarized as follows: [11] Let {ℳ, 𝒮} be two finite size point sets in a finite-dimensional real vector space ℝ^d, which contain M and N points respectively (e.g., d = 3 recovers the typical case where ℳ and 𝒮 are 3D point sets). The problem is to find a transformation to be applied to the moving "model" point set ℳ such that the difference (typically defined in the sense of point-wise Euclidean distance) between ℳ and the static "scene" set 𝒮 is minimized. In other words, a mapping from ℝ^d to ℝ^d is desired which yields the best alignment between the transformed "model" set and the "scene" set. The mapping may consist of a rigid or non-rigid transformation. The transformation model may be written as T, using which the transformed, registered model point set is:
T(ℳ) = {T(m) : m ∈ ℳ}    (1)
The output of a point set registration algorithm is therefore the optimal transformation T* such that T*(ℳ) is best aligned to 𝒮, according to some defined notion of distance function dist(·, ·):
T* = argmin_{T ∈ 𝒯} dist(T(ℳ), 𝒮)    (2)
where 𝒯 is used to denote the set of all possible transformations that the optimization searches over. The most popular choice of the distance function is to take the square of the Euclidean distance for every pair of points:
dist(T(ℳ), 𝒮) = Σ_{m ∈ T(ℳ)} ‖m − ŝ_m‖²,  where  ŝ_m = argmin_{s ∈ 𝒮} ‖m − s‖    (3)
where ‖·‖ denotes the vector 2-norm, and ŝ_m is the corresponding point in set 𝒮 that attains the shortest distance to a given point m in set ℳ after transformation. Minimizing such a function in rigid registration is equivalent to solving a least squares problem.
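The cost in Equation (3) is straightforward to evaluate directly. The sketch below (a minimal NumPy illustration, not drawn from any particular library) computes the sum of squared distances from each transformed model point to its nearest scene point by brute force; real systems would use a k-d tree for the nearest-neighbour search.

```python
import numpy as np

def registration_cost(transformed_model, scene):
    """Sum of squared distances from each transformed model point
    to its nearest scene point, i.e. the cost in Equation (3)."""
    # pairwise squared distances, shape (num_model, num_scene)
    d2 = ((transformed_model[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
    # for each model point, keep the distance to its closest scene point
    return d2.min(axis=1).sum()

scene = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
model = scene + np.array([0.1, 0.0])   # model shifted by 0.1 along x
print(registration_cost(model, scene))  # ≈ 0.03 (three residuals of 0.1 each)
```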
When the correspondences (i.e., the pairs (m_i, s_i)) are given before the optimization, for example, using feature matching techniques, then the optimization only needs to estimate the transformation. This type of registration is called correspondence-based registration. On the other hand, if the correspondences are unknown, then the optimization is required to jointly find out the correspondences and transformation together. This type of registration is called simultaneous pose and correspondence registration.
Given two point sets, rigid registration yields a rigid transformation which maps one point set to the other. A rigid transformation is defined as a transformation that does not change the distance between any two points. Typically such a transformation consists of translation and rotation. [12] In rare cases, the point set may also be mirrored. In robotics and computer vision, rigid registration has the most applications.
Given two point sets, non-rigid registration yields a non-rigid transformation which maps one point set to the other. Non-rigid transformations include affine transformations such as scaling and shear mapping. However, in the context of point set registration, non-rigid registration typically involves nonlinear transformation. If the eigenmodes of variation of the point set are known, the nonlinear transformation may be parametrized by the eigenvalues. [13] A nonlinear transformation may also be parametrized as a thin plate spline. [14] [13]
Some approaches to point set registration use algorithms that solve the more general graph matching problem. [11] However, the computational complexity of such methods tends to be high, and they are limited to rigid registrations. In this article, we will only consider algorithms for rigid registration, where the transformation is assumed to contain 3D rotations and translations (possibly also including a uniform scaling).
The PCL (Point Cloud Library) is an open-source framework for n-dimensional point cloud and 3D geometry processing. It includes several point registration algorithms. [15]
Correspondence-based methods assume the putative correspondences (m_i, s_i) are given for every point m_i. Therefore, we arrive at a setting where both point sets ℳ and 𝒮 have N points and the correspondences (m_i, s_i), i = 1, …, N are given.
In the simplest case, one can assume that all the correspondences are correct, meaning that the points are generated as follows:
s_i = s·R·m_i + t + ε_i    (cb.1)
where s > 0 is a uniform scaling factor (in many cases s = 1 is assumed), R ∈ SO(3) is a proper 3D rotation matrix (SO(3) is the special orthogonal group of degree 3), t ∈ ℝ³ is a 3D translation vector and ε_i models the unknown additive noise (e.g., Gaussian noise). Specifically, if the noise ε_i is assumed to follow a zero-mean isotropic Gaussian distribution with standard deviation σ_i, i.e., ε_i ~ 𝒩(0, σ_i²I₃), then the following optimization can be shown to yield the maximum likelihood estimate for the unknown scale, rotation and translation:
min_{s > 0, R ∈ SO(3), t ∈ ℝ³} Σ_{i=1}^{N} (1/σ_i²) ‖s_i − s·R·m_i − t‖²    (cb.2)
Note that when the scaling factor s is 1 and the translation vector t is zero, the optimization recovers the formulation of the Wahba problem. Despite the non-convexity of the optimization ( cb.2 ) due to non-convexity of the set SO(3), seminal work by Berthold K.P. Horn showed that ( cb.2 ) actually admits a closed-form solution, by decoupling the estimation of scale, rotation and translation. [16] Similar results were discovered by Arun et al. [17] In addition, in order to find a unique transformation (s, R, t), at least 3 non-collinear points in each point set are required.
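The closed-form rotation/translation recovery can be sketched compactly with an SVD, in the style of Arun et al. The NumPy snippet below is an illustrative sketch of the no-scaling case (s = 1) with equal noise weights, not a reproduction of either paper's exact derivation:

```python
import numpy as np

def arun(model, scene):
    """Closed-form least-squares fit of R, t for known one-to-one
    correspondences (SVD method in the style of Arun et al.)."""
    mu_m, mu_s = model.mean(axis=0), scene.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (model - mu_m).T @ (scene - mu_s)
    U, _, Vt = np.linalg.svd(H)
    # correction term guards against returning a reflection (det = -1)
    D = np.diag([1.0] * (model.shape[1] - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t
```

Given noise-free correspondences, this recovers the ground-truth rigid transformation exactly; with Gaussian noise it returns the least-squares optimum of ( cb.2 ) for fixed s = 1.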
More recently, Briales and Gonzalez-Jimenez have developed a semidefinite relaxation using Lagrangian duality, for the case where the model set contains different 3D primitives such as points, lines and planes (which is the case when the model is a 3D mesh). [18] Interestingly, the semidefinite relaxation is empirically tight, i.e., a certifiably globally optimal solution can be extracted from the solution of the semidefinite relaxation.
The least squares formulation ( cb.2 ) is known to perform arbitrarily badly in the presence of outliers. An outlier correspondence is a pair of measurements that departs from the generative model ( cb.1 ). In this case, one can consider a different generative model as follows: [19]
s_i = s·R·m_i + t + ε_i  (if the i-th pair is an inlier);  s_i = o_i  (if the i-th pair is an outlier)    (cb.3)
where if the i-th pair (m_i, s_i) is an inlier, then it obeys the outlier-free model ( cb.1 ), i.e., s_i is obtained from m_i by a spatial transformation plus some small noise; however, if the i-th pair is an outlier, then s_i can be any arbitrary vector o_i. Since one does not know which correspondences are outliers beforehand, robust registration under the generative model ( cb.3 ) is of paramount importance for computer vision and robotics deployed in the real world, because current feature matching techniques tend to output highly corrupted correspondences where over 95% of the correspondences can be outliers. [20]
Next, we describe several common paradigms for robust registration.
Maximum consensus seeks to find the largest set of correspondences that are consistent with the generative model ( cb.1 ) for some choice of spatial transformation (s, R, t). Formally speaking, maximum consensus solves the following optimization:
max_{𝓘 ⊆ [N]; s > 0, R ∈ SO(3), t ∈ ℝ³} |𝓘|  subject to  ‖s_i − s·R·m_i − t‖ ≤ ξ,  ∀ i ∈ 𝓘    (cb.4)
where |𝓘| denotes the cardinality of the set 𝓘. The constraint in ( cb.4 ) enforces that every pair of measurements in the inlier set 𝓘 must have residuals smaller than a pre-defined threshold ξ. Unfortunately, recent analyses have shown that globally solving problem ( cb.4 ) is NP-Hard, and global algorithms typically have to resort to branch-and-bound (BnB) techniques that take exponential-time complexity in the worst case. [21] [22] [23] [24] [25]
Although solving consensus maximization exactly is hard, there exist efficient heuristics that perform quite well in practice. One of the most popular heuristics is the Random Sample Consensus (RANSAC) scheme. [26] RANSAC is an iterative hypothesize-and-verify method. At each iteration, the method first randomly samples 3 out of the total number N of correspondences and computes a hypothesis (s, R, t) using Horn's method, [16] then the method evaluates the constraints in ( cb.4 ) to count how many correspondences actually agree with such a hypothesis (i.e., it computes the residual and compares it with the threshold ξ for each pair of measurements). The algorithm terminates either after it has found a consensus set that has enough correspondences, or after it has reached the total number of allowed iterations. RANSAC is highly efficient because the main computation of each iteration is carrying out the closed-form solution in Horn's method. However, RANSAC is non-deterministic and only works well in the low-outlier-ratio regime (e.g., below 50%), because its runtime grows exponentially with respect to the outlier ratio. [20]
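The hypothesize-and-verify loop can be sketched in a few lines. This is an illustrative simplification (no scaling, fixed iteration budget, brute-force inlier counting), not a production RANSAC; the helper `best_fit` and all parameter values are assumptions for the example:

```python
import numpy as np

def best_fit(model, scene):
    # closed-form rigid fit (Horn / Arun style) for paired points
    mu_m, mu_s = model.mean(axis=0), scene.mean(axis=0)
    U, _, Vt = np.linalg.svd((model - mu_m).T @ (scene - mu_s))
    D = np.diag([1.0] * (model.shape[1] - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, mu_s - R @ mu_m

def ransac_register(model, scene, iters=500, thresh=0.05, seed=0):
    """Hypothesize-and-verify: sample a minimal set of 3 correspondences,
    fit a rigid transform, count inliers, keep the best hypothesis, then
    refit on the final consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(model), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(model), size=3, replace=False)
        R, t = best_fit(model[idx], scene[idx])
        residuals = np.linalg.norm(scene - (model @ R.T + t), axis=1)
        inliers = residuals < thresh          # the constraint in (cb.4)
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_fit(model[best_inliers], scene[best_inliers])
```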
To fill the gap between the fast but inexact RANSAC scheme and the exact but exhaustive BnB optimization, recent research has developed deterministic approximate methods to solve consensus maximization. [21] [22] [27] [23]
Outlier removal methods seek to pre-process the set of highly corrupted correspondences before estimating the spatial transformation. The motivation of outlier removal is to significantly reduce the number of outlier correspondences, while maintaining inlier correspondences, so that optimization over the transformation becomes easier and more efficient (e.g., RANSAC works poorly when the outlier ratio is above 95% but performs quite well when the outlier ratio is below 50%).
Parra et al. have proposed a method called Guaranteed Outlier Removal (GORE) that uses geometric constraints to prune outlier correspondences while guaranteeing to preserve inlier correspondences. [20] GORE has been shown to be able to drastically reduce the outlier ratio, which can significantly boost the performance of consensus maximization using RANSAC or BnB. Yang and Carlone have proposed to build pairwise translation-and-rotation-invariant measurements (TRIMs) from the original set of measurements and embed TRIMs as the edges of a graph whose nodes are the 3D points. Since inliers are pairwise consistent in terms of the scale, they must form a clique within the graph. Therefore, using efficient algorithms for computing the maximum clique of a graph can find the inliers and effectively prune the outliers. [4] The maximum clique based outlier removal method is also shown to be quite useful in real-world point set registration problems. [19] Similar outlier removal ideas were also proposed by Parra et al. [28]
M-estimation replaces the least squares objective function in ( cb.2 ) with a robust cost function that is less sensitive to outliers. Formally, M-estimation seeks to solve the following problem:
min_{s > 0, R ∈ SO(3), t ∈ ℝ³} Σ_{i=1}^{N} ρ(‖s_i − s·R·m_i − t‖)    (cb.5)
where ρ(·) represents the choice of the robust cost function. Note that choosing ρ(x) = x² recovers the least squares estimation in ( cb.2 ). Popular robust cost functions include the ℓ₁-norm loss, Huber loss, [29] Geman-McClure loss [30] and truncated least squares loss. [19] [8] [4] M-estimation has been one of the most popular paradigms for robust estimation in robotics and computer vision. [31] [32] Because robust objective functions are typically non-convex (e.g., the truncated least squares loss vs. the least squares loss), algorithms for solving the non-convex M-estimation are typically based on local optimization, where first an initial guess is provided, followed by iterative refinements of the transformation to keep decreasing the objective function. Local optimization tends to work well when the initial guess is close to the global minimum, but it is also prone to getting stuck in local minima if provided with poor initialization.
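The qualitative difference between these losses is easiest to see numerically: for a large residual, the quadratic loss explodes while the robust losses grow slowly or saturate. A small sketch of some common choices (standard textbook forms; the scale parameters shown are illustrative defaults, not values from the cited papers):

```python
import numpy as np

# Robust costs rho(r) as functions of the residual r.
def least_squares(r):
    return r ** 2

def huber(r, delta=1.0):
    # quadratic near zero, linear in the tails
    return np.where(np.abs(r) <= delta,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - 0.5 * delta))

def geman_mcclure(r, mu=1.0):
    # saturates toward mu for large residuals
    return mu * r ** 2 / (mu + r ** 2)

def truncated_least_squares(r, c=1.0):
    # quadratic up to c, then a constant penalty (outliers "discarded")
    return np.minimum(r ** 2, c ** 2)

for rho in (least_squares, huber, geman_mcclure, truncated_least_squares):
    print(rho.__name__, float(rho(10.0)))
```

For a residual of 10, least squares pays 100 while the truncated loss pays only the constant 1, which is why a single gross outlier cannot dominate the robust objective.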
Graduated non-convexity (GNC) is a general-purpose framework for solving non-convex optimization problems without initialization. It has achieved success in early vision and machine learning applications. [33] [34] The key idea behind GNC is to solve the hard non-convex problem by starting from an easy convex problem. Specifically, for a given robust cost function ρ(·), one can construct a surrogate function ρ_μ(·) with a hyper-parameter μ, tuning which can gradually increase the non-convexity of the surrogate function ρ_μ(·) until it converges to the target function ρ(·). [34] [35] Therefore, at each level of the hyper-parameter μ, the following optimization is solved:
min_{s > 0, R ∈ SO(3), t ∈ ℝ³} Σ_{i=1}^{N} ρ_μ(‖s_i − s·R·m_i − t‖)    (cb.6)
Black and Rangarajan proved that the objective function of each optimization ( cb.6 ) can be dualized into a sum of weighted least squares and a so-called outlier process function on the weights that determine the confidence of the optimization in each pair of measurements. [33] Using Black-Rangarajan duality and GNC tailored for the Geman-McClure function, Zhou et al. developed the fast global registration algorithm that is robust against about 80% outliers in the correspondences. [30] More recently, Yang et al. showed that the joint use of GNC (tailored to the Geman-McClure function and the truncated least squares function) and Black-Rangarajan duality can lead to a general-purpose solver for robust registration problems, including point cloud and mesh registration. [35]
Almost none of the robust registration algorithms mentioned above (except the BnB algorithm that runs in exponential-time in the worst case) comes with performance guarantees, which means that these algorithms can return completely incorrect estimates without notice. Therefore, these algorithms are undesirable for safety-critical applications like autonomous driving.
Very recently, Yang et al. has developed the first certifiably robust registration algorithm, named Truncated least squares Estimation And SEmidefinite Relaxation (TEASER). [19] For point cloud registration, TEASER not only outputs an estimate of the transformation, but also quantifies the optimality of the given estimate. TEASER adopts the following truncated least squares (TLS) estimator:
min_{s > 0, R ∈ SO(3), t ∈ ℝ³} Σ_{i=1}^{N} min( (1/β_i²) ‖s_i − s·R·m_i − t‖², c̄² )    (cb.7)
which is obtained by choosing the TLS robust cost function ρ(x) = min(x², c̄²), where c̄ is a pre-defined constant that determines the maximum allowed residuals to be considered inliers. The TLS objective function has the property that for inlier correspondences (residuals below the threshold), the usual least squares penalty is applied; while for outlier correspondences (residuals above the threshold), no penalty is applied and the outliers are discarded. If the TLS optimization ( cb.7 ) is solved to global optimality, then it is equivalent to running Horn's method on only the inlier correspondences.
However, solving ( cb.7 ) is quite challenging due to its combinatorial nature. TEASER solves ( cb.7 ) as follows: (i) It builds invariant measurements such that the estimation of scale, rotation and translation can be decoupled and solved separately, a strategy that is inspired by the original Horn's method; (ii) The same TLS estimation is applied for each of the three sub-problems, where the scale TLS problem can be solved exactly using an algorithm called adaptive voting, the rotation TLS problem can be relaxed to a semidefinite program (SDP) where the relaxation is exact in practice, [8] even with a large amount of outliers; the translation TLS problem can be solved using component-wise adaptive voting. A fast implementation leveraging GNC is available open-source. In practice, TEASER can tolerate more than 99% outlier correspondences and runs in milliseconds.
In addition to developing TEASER, Yang et al. also prove that, under some mild conditions on the point cloud data, TEASER's estimated transformation has bounded errors from the ground-truth transformation. [19]
The iterative closest point (ICP) algorithm was introduced by Besl and McKay. [36] The algorithm performs rigid registration in an iterative fashion by alternating between (i) given the transformation, finding the closest point in 𝒮 for every point in T(ℳ, θ); and (ii) given the correspondences, finding the best rigid transformation by solving the least squares problem ( cb.2 ). As such, it works best if the initial pose of ℳ is sufficiently close to 𝒮. In pseudocode, the basic algorithm is implemented as follows:
algorithm ICP(ℳ, 𝒮)
    θ := θ0
    while not registered:
        X := ∅
        for m_i ∈ T(ℳ, θ):
            ŝ_i := closest point in 𝒮 to m_i
            X := X + ⟨m_i, ŝ_i⟩
        θ := least_squares(X)
    return θ
Here, the function least_squares performs least squares optimization to minimize the distance in each of the pairs, using the closed-form solutions by Horn [16] and Arun. [17]
Because the cost function of registration depends on finding the closest point in 𝒮 to every point in T(ℳ, θ), it can change as the algorithm is running. As such, it is difficult to prove that ICP will in fact converge exactly to the local optimum. [37] In fact, empirically, ICP and EM-ICP do not converge to the local minimum of the cost function. [37] Nonetheless, because ICP is intuitive to understand and straightforward to implement, it remains the most commonly used point set registration algorithm. [37] Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. [13] [38] For example, the expectation maximization algorithm is applied to the ICP algorithm to form the EM-ICP method, and the Levenberg-Marquardt algorithm is applied to the ICP algorithm to form the LM-ICP method. [12]
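The alternation above can be condensed into a short NumPy sketch. This is a bare-bones illustration (brute-force nearest neighbours, closed-form fit, fixed iteration count, no convergence test or outlier handling), intended only to mirror the pseudocode:

```python
import numpy as np

def fit_pairs(model, scene):
    # closed-form least-squares R, t for paired points (Horn / Arun style)
    mu_m, mu_s = model.mean(axis=0), scene.mean(axis=0)
    U, _, Vt = np.linalg.svd((model - mu_m).T @ (scene - mu_s))
    D = np.diag([1.0] * (model.shape[1] - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, mu_s - R @ mu_m

def icp(model, scene, iters=50):
    """Basic ICP: alternate nearest-neighbour matching with the
    closed-form least-squares fit, starting from the identity."""
    d = model.shape[1]
    R, t = np.eye(d), np.zeros(d)
    for _ in range(iters):
        moved = model @ R.T + t
        d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
        matched = scene[d2.argmin(axis=1)]  # closest scene point per model point
        R, t = fit_pairs(model, matched)
    return R, t
```

As the text notes, this only converges to a good alignment when the initial pose is close enough that most nearest-neighbour matches are correct.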
Robust point matching (RPM) was introduced by Gold et al. [39] The method performs registration using deterministic annealing and soft assignment of correspondences between point sets. Whereas in ICP the correspondence generated by the nearest-neighbour heuristic is binary, RPM uses a soft correspondence where the correspondence between any two points can be anywhere from 0 to 1, although it ultimately converges to either 0 or 1. The correspondences found in RPM are always one-to-one, which is not always the case in ICP. [14] Let m_i be the i-th point in ℳ and s_j be the j-th point in 𝒮. The match matrix μ is defined as such:
μ_ji = 1 if point m_i corresponds to point s_j;  μ_ji = 0 otherwise    (rpm.1)
The problem is then defined as: Given two point sets ℳ and 𝒮, find the affine transformation T and the match matrix μ that best relates them. [39] Knowing the optimal transformation makes it easy to determine the match matrix, and vice versa. However, the RPM algorithm determines both simultaneously. The transformation may be decomposed into a translation vector t and a transformation matrix A: T(m) = A·m + t.
The matrix A in 2D is composed of four separate parameters {a, θ, b, c}, which are scale, rotation, and the vertical and horizontal shear components respectively. The cost function is then:
cost = Σ_{j=1}^{J} Σ_{i=1}^{I} μ_ji ‖s_j − t − A·m_i‖² + g(A) − α Σ_{j=1}^{J} Σ_{i=1}^{I} μ_ji    (rpm.2)
subject to Σ_j μ_ji ≤ 1 (for all i), Σ_i μ_ji ≤ 1 (for all j), and μ_ji ∈ [0, 1]. The α term biases the objective towards stronger correlation by decreasing the cost if the match matrix has more ones in it. The function g(A) serves to regularize the affine transformation by penalizing large values of the scale and shear components:
g(A) = λ(a² + b² + c²) for some regularization parameter λ.
The RPM method optimizes the cost function using the Softassign algorithm. The 1D case will be derived here. Given a set of variables {Q_j}, a variable μ_j is associated with each Q_j such that Σ_j μ_j = 1. The goal is to find the assignment μ that maximizes Σ_j μ_j Q_j. This can be formulated as a continuous problem by introducing a control parameter β > 0. In the deterministic annealing method, the control parameter β is slowly increased as the algorithm runs. Let μ be:
μ_j = exp(β Q_j) / Σ_k exp(β Q_k)    (rpm.3)
this is known as the softmax function. As β increases, it approaches a binary value as desired in Equation ( rpm.1 ). The problem may now be generalized to the 2D case, where instead of maximizing Σ_j μ_j Q_j, the following is maximized:
Σ_{j=1}^{J} Σ_{i=1}^{I} μ_ji Q_ji    (rpm.4)
where Q_ji = −(‖s_j − t − A·m_i‖² − α).
This is straightforward, except that now the constraints on μ are doubly stochastic matrix constraints: Σ_j μ_ji = 1 for all i, and Σ_i μ_ji = 1 for all j. As such the denominator from Equation ( rpm.3 ) cannot be expressed for the 2D case simply. To satisfy the constraints, it is possible to use a result due to Sinkhorn, [39] which states that a doubly stochastic matrix is obtained from any square matrix with all positive entries by the iterative process of alternating row and column normalizations. Thus the algorithm is written as such: [39]
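Sinkhorn's alternating normalization is simple to demonstrate. The sketch below (a generic illustration, not RPM-specific code) repeatedly rescales rows and columns of a positive matrix until both row and column sums are 1:

```python
import numpy as np

def sinkhorn(A, iters=200):
    """Alternating row/column normalization of a positive square matrix.
    By Sinkhorn's theorem the iteration converges to a doubly
    stochastic matrix."""
    M = A.astype(float).copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # make each row sum to 1
        M /= M.sum(axis=0, keepdims=True)  # make each column sum to 1
    return M
```

After enough iterations, both normalizations are (numerically) satisfied simultaneously, which is exactly the doubly stochastic constraint on the match matrix μ.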
algorithm RPM2D(ℳ, 𝒮)
    t := 0
    a, θ, b, c := 0
    β := β0
    while β < βf:
        while μ has not converged:
            // update correspondence parameters by softassign
            μ⁰_ji := exp(β Q_ji)
            // apply Sinkhorn's method
            while μ̂ has not converged:
                // update μ̂ by normalizing across all rows:
                μ̂¹_ji := μ̂⁰_ji / Σ_{i=1}^{I+1} μ̂⁰_ji
                // update μ̂ by normalizing across all columns:
                μ̂⁰_ji := μ̂¹_ji / Σ_{j=1}^{J+1} μ̂¹_ji
            // update pose parameters by coordinate descent
            update θ using analytical solution
            update t using analytical solution
            update a, b, c using Newton's method
        β := βr · β
    return a, b, c, θ and t
where the deterministic annealing control parameter β is initially set to β0 and increases by factor βr until it reaches the maximum value βf. The summations in the normalization steps sum to J+1 and I+1 instead of just J and I because the constraints on μ are inequalities. As such the (J+1)-th and (I+1)-th elements are slack variables.
The algorithm can also be extended for point sets in 3D or higher dimensions. The constraints on the correspondence matrix are the same in the 3D case as in the 2D case. Hence the structure of the algorithm remains unchanged, with the main difference being how the rotation and translation matrices are solved. [39]
The thin plate spline robust point matching (TPS-RPM) algorithm by Chui and Rangarajan augments the RPM method to perform non-rigid registration by parametrizing the transformation as a thin plate spline. [14] However, because the thin plate spline parametrization only exists in three dimensions, the method cannot be extended to problems involving four or more dimensions.
The kernel correlation (KC) approach of point set registration was introduced by Tsin and Kanade. [37] Compared with ICP, the KC algorithm is more robust against noisy data. Unlike ICP, where, for every model point, only the closest scene point is considered, here every scene point affects every model point. [37] As such this is a multiply-linked registration algorithm. For some kernel function K, the kernel correlation KC of two points x_i, x_j is defined thus: [37]
KC(x_i, x_j) = ∫ K(x, x_i) · K(x, x_j) dx    (kc.1)
The kernel function K chosen for point set registration is typically a symmetric and non-negative kernel, similar to the ones used in Parzen window density estimation. The Gaussian kernel is typically used for its simplicity, although other ones like the Epanechnikov kernel and the tricube kernel may be substituted. [37] The kernel correlation of an entire point set 𝒳 is defined as the sum of the kernel correlations of every point in the set to every other point in the set: [37]
KC(𝒳) = Σ_{i≠j} KC(x_i, x_j)    (kc.2)
The logarithm of the KC of a point set is proportional, within a constant factor, to the information entropy. Observe that the KC is a measure of the "compactness" of the point set; trivially, if all points in the point set were at the same location, the KC would evaluate to a large value. The cost function of the point set registration algorithm for some transformation parameter θ is defined thus:
COST(𝒮, ℳ, θ) = −KC(𝒮 ∪ T(ℳ, θ))    (kc.3)
Some algebraic manipulation yields:
COST(𝒮, ℳ, θ) = −KC(𝒮) − KC(T(ℳ, θ)) − 2 Σ_{s∈𝒮} Σ_{m∈ℳ} KC(s, T(m, θ))    (kc.4)
The expression is simplified by observing that KC(𝒮) is independent of θ. Furthermore, assuming rigid registration, KC(T(ℳ, θ)) is invariant when θ is changed because the Euclidean distance between every pair of points stays the same under rigid transformation. So the above equation may be rewritten as:
COST(𝒮, ℳ, θ) = −2 Σ_{s∈𝒮} Σ_{m∈ℳ} KC(s, T(m, θ)) + C    (kc.5)
The kernel density estimates are defined as:

P_ℳ(x, θ) = (1/M) Σ_{m∈ℳ} K(x, T(m, θ)),    P_𝒮(x) = (1/N) Σ_{s∈𝒮} K(x, s)
The cost function can then be shown to be the correlation of the two kernel density estimates:
COST(𝒮, ℳ, θ) ∝ −∫ P_𝒮(x) · P_ℳ(x, θ) dx    (kc.6)
Having established the cost function, the algorithm simply uses gradient descent to find the optimal transformation. It is computationally expensive to compute the cost function from scratch on every iteration, so a discrete version of the cost function Equation ( kc.6 ) is used. The kernel density estimates can be evaluated at grid points and stored in a lookup table. Unlike the ICP and related methods, it is not necessary to find the nearest neighbour, which allows the KC algorithm to be comparatively simple in implementation.
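For Gaussian kernels, the double sum in ( kc.5 ) reduces to summing Gaussian affinities between every (scene, model) pair. The sketch below is an illustrative discrete stand-in for that cost (the bandwidth `sigma` and the unnormalized Gaussian are assumptions of the example, not the authors' exact formulation); note that a better alignment yields a lower (more negative) cost:

```python
import numpy as np

def kc_cost(transformed_model, scene, sigma=0.5):
    """Negative sum of Gaussian affinities between every (scene, model)
    pair: a discrete stand-in for the KC cost in Equation (kc.5).
    Every scene point influences every model point (multiply-linked)."""
    d2 = ((scene[:, None, :] - transformed_model[None, :, :]) ** 2).sum(-1)
    return -np.exp(-d2 / (2.0 * sigma ** 2)).sum()
```

Because the affinity decays smoothly with distance, the cost surface is smooth in the transformation parameters, which is what makes plain gradient descent viable here, in contrast to the piecewise nature of the ICP cost.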
Compared to ICP and EM-ICP for noisy 2D and 3D point sets, the KC algorithm is less sensitive to noise and results in correct registration more often. [37]
The kernel density estimates are sums of Gaussians and may therefore be represented as Gaussian mixture models (GMM). [40] Jian and Vemuri use the GMM version of the KC registration algorithm to perform non-rigid registration parametrized by thin plate splines.
Coherent point drift (CPD) was introduced by Myronenko and Song. [13] [41] The algorithm takes a probabilistic approach to aligning point sets, similar to the GMM KC method. Unlike earlier approaches to non-rigid registration which assume a thin plate spline transformation model, CPD is agnostic with regard to the transformation model used. The point set represents the Gaussian mixture model (GMM) centroids. When the two point sets are optimally aligned, the correspondence is the maximum of the GMM posterior probability for a given data point. To preserve the topological structure of the point sets, the GMM centroids are forced to move coherently as a group. The expectation maximization algorithm is used to optimize the cost function. [13]
Let there be M points in ℳ and N points in 𝒮. The GMM probability density function for a point s is:
p(s) = Σ_{i=1}^{M+1} P(i) p(s | i)    (cpd.1)
where, in D dimensions, p(s | i) = (2πσ²)^(−D/2) exp( −‖s − m_i‖² / (2σ²) ) is the Gaussian distribution centered on point m_i ∈ ℳ.
The membership probabilities P(i) = 1/M are equal for all GMM components. To account for noise and outliers, an additional uniform distribution p(s | M+1) = 1/N is added to the mixture; its weight is denoted as w, 0 ≤ w ≤ 1. The mixture model is then:
p(s) = w·(1/N) + (1 − w) Σ_{i=1}^{M} (1/M) p(s | i)    (cpd.2)
The GMM centroids are re-parametrized by a set of parameters θ estimated by maximizing the likelihood. This is equivalent to minimizing the negative log-likelihood function:
E(θ, σ²) = −Σ_{j=1}^{N} log p(s_j)    (cpd.3)
where it is assumed that the data is independent and identically distributed. The correspondence probability between two points m_i and s_j is defined as the posterior probability of the GMM centroid given the data point: P(i | s_j) = P(i) p(s_j | i) / p(s_j).
The expectation maximization (EM) algorithm is used to find and . The EM algorithm consists of two steps. First, in the E-step or estimation step, it guesses the values of parameters ("old" parameter values) and then uses Bayes' theorem to compute the posterior probability distributions of mixture components. Second, in the M-step or maximization step, the "new" parameter values are then found by minimizing the expectation of the complete negative log-likelihood function, i.e. the cost function:
Q = −Σ_{j=1}^{N} Σ_{i=1}^{M+1} P^old(i | s_j) log( P^new(i) p^new(s_j | i) )    (cpd.4)
Ignoring constants independent of θ and σ², Equation ( cpd.4 ) can be expressed thus:
Q(θ, σ²) = (1/(2σ²)) Σ_{j=1}^{N} Σ_{i=1}^{M} P^old(i | s_j) ‖s_j − T(m_i, θ)‖² + (N_P D / 2) log σ²    (cpd.5)
where N_P = Σ_{j=1}^{N} Σ_{i=1}^{M} P^old(i | s_j) ≤ N, with N_P = N only if w = 0. The posterior probabilities of GMM components computed using previous parameter values are:
P^old(i | s_j) = exp( −‖s_j − T(m_i, θ_old)‖² / (2σ_old²) ) / ( Σ_{k=1}^{M} exp( −‖s_j − T(m_k, θ_old)‖² / (2σ_old²) ) + (2πσ_old²)^(D/2) · (w M) / ((1 − w) N) )    (cpd.6)
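The E-step posteriors of ( cpd.6 ) vectorize naturally. The sketch below (an illustrative NumPy rendering, not the reference implementation) computes the full M × N posterior matrix at once; with w = 0 the columns sum to exactly 1, and with w > 0 the remaining mass is assigned to the uniform outlier component:

```python
import numpy as np

def cpd_estep(S, TM, sigma2, w):
    """Posterior matrix P[i, j] = P(i | s_j) for a GMM whose M Gaussian
    components sit at the transformed model points TM, plus a uniform
    outlier component of weight w (cf. Equation (cpd.6)).
    S: (N, D) scene points; TM: (M, D) transformed model points."""
    M, (N, D) = TM.shape[0], S.shape
    d2 = ((TM[:, None, :] - S[None, :, :]) ** 2).sum(-1)   # (M, N) squared distances
    num = np.exp(-d2 / (2.0 * sigma2))
    # constant from the uniform component; zero when w = 0
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    return num / (num.sum(axis=0, keepdims=True) + c)
```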
Minimizing the cost function in Equation ( cpd.5 ) necessarily decreases the negative log-likelihood function E in Equation ( cpd.3 ) unless it is already at a local minimum. [13] Thus, the algorithm can be expressed using the following pseudocode, where the point sets ℳ and 𝒮 are represented as M × D and N × D matrices M and S respectively: [13]
algorithm CPD(𝒮, ℳ)
    θ := θ0
    initialize 0 ≤ w ≤ 1
    while not registered:
        // E-step, compute P
        for i ∈ [1, M] and j ∈ [1, N]:
            p_ij := P^old(i | s_j)
        // M-step, solve for optimal transformation
        {θ, σ²} := solve(S, M, P)
    return θ
where the vector 1 is a column vector of ones. The solve function differs by the type of registration performed. For example, in rigid registration, the output is a scale a, a rotation matrix R, and a translation vector t. The parameter θ can be written as a tuple of these: θ = {a, R, t}, which is initialized to one, the identity matrix, and a column vector of zeroes: θ0 = {1, I, 0}.
The aligned point set is: T(M) = aMRᵀ + 1tᵀ.
The solve_rigid function for rigid registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper. [13]
solve_rigid(S, M, P)
    N_P := 1ᵀP1
    μ_s := (1/N_P) SᵀPᵀ1
    μ_m := (1/N_P) MᵀP1
    Ŝ := S − 1μ_sᵀ
    M̂ := M − 1μ_mᵀ
    A := ŜᵀPᵀM̂
    U, V := svd(A)    // the singular value decomposition of A = UΣVᵀ
    C := diag(1, …, 1, det(UVᵀ))    // diag(ξ) is the diagonal matrix formed from vector ξ
    R := UCVᵀ
    a := tr(AᵀR) / tr(M̂ᵀ diag(P1) M̂)    // tr is the trace of a matrix
    t := μ_s − aRμ_m
    σ² := (1/(N_P D)) (tr(Ŝᵀ diag(Pᵀ1) Ŝ) − a tr(AᵀR))
    return {a, R, t}, σ²
For affine registration, where the goal is to find an affine transformation instead of a rigid one, the output is an affine transformation matrix B and a translation vector t such that the aligned point set is: T(M) = MBᵀ + 1tᵀ.
The solve_affine function for affine registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper. [13]
solve_affine(S, M, P)
    N_P := 1ᵀP1
    μ_s := (1/N_P) SᵀPᵀ1
    μ_m := (1/N_P) MᵀP1
    Ŝ := S − 1μ_sᵀ
    M̂ := M − 1μ_mᵀ
    B := (ŜᵀPᵀM̂)(M̂ᵀ diag(P1) M̂)⁻¹
    t := μ_s − Bμ_m
    σ² := (1/(N_P D)) (tr(Ŝᵀ diag(Pᵀ1) Ŝ) − tr(ŜᵀPᵀM̂Bᵀ))
    return {B, t}, σ²
It is also possible to use CPD with non-rigid registration using a parametrization derived using calculus of variations. [13]
Sums of Gaussian distributions can be computed in linear time using the fast Gauss transform (FGT). [13] Consequently, the time complexity of CPD is O(M+N), which is asymptotically much faster than O(MN) methods. [13]
A variant of coherent point drift, called Bayesian coherent point drift (BCPD), was derived through a Bayesian formulation of point set registration. [42] BCPD has several advantages over CPD, e.g., (1) nonrigid and rigid registrations can be performed in a single algorithm, (2) the algorithm can be accelerated regardless of the Gaussianity of a Gram matrix to define motion coherence, (3) the algorithm is more robust against outliers because of a more reasonable definition of an outlier distribution. Additionally, in the Bayesian formulation, motion coherence was introduced through a prior distribution of displacement vectors, providing a clear difference between tuning parameters that control motion coherence. BCPD was further accelerated by a method called BCPD++, which is a three-step procedure composed of (1) downsampling of point sets, (2) registration of downsampled point sets, and (3) interpolation of a deformation field. [43] The method can register point sets composed of more than 10M points while maintaining its registration accuracy.
A variant of coherent point drift, called CPD with Local Surface Geometry (LSG-CPD), performs rigid point cloud registration. [44] The method adaptively adds different levels of point-to-plane penalization on top of the point-to-point penalization based on the flatness of the local surface. This results in GMM components with anisotropic covariances, instead of the isotropic covariances in the original CPD. [13] The anisotropic covariance matrix is modeled as:
(lsg-cpd.1) |
where
(lsg-cpd.2) |
is the anisotropic covariance matrix of the m-th point in the target set; the associated normal vector corresponds to the same point; an identity matrix serves as a regularizer, pulling the problem away from ill-posedness. The penalization coefficient (a modified sigmoid function) is set adaptively to add different levels of point-to-plane penalization depending on how flat the local surface is. This is realized by evaluating the surface variation [45] within the neighborhood of the m-th target point. A constant sets the upper bound of the penalization.
The point cloud registration is formulated as a maximum likelihood estimation (MLE) problem and solved with the Expectation-Maximization (EM) algorithm. In the E-step, the correspondence computation is recast into simple matrix manipulations and efficiently computed on a GPU. In the M-step, an unconstrained optimization on a matrix Lie group is designed to efficiently update the rigid transformation of the registration. Taking advantage of the local geometric covariances, the method shows a superior performance in accuracy and robustness to noise and outliers, compared with the baseline CPD. [46] An enhanced runtime performance is expected thanks to the GPU-accelerated correspondence calculation. An implementation of LSG-CPD is available open-source.
The sorting the correspondence space (SCS) algorithm was introduced in 2013 by H. Assalih to accommodate sonar image registration. [47] Sonar images tend to have high amounts of noise, so many outliers are expected in the point sets to be matched. SCS delivers high robustness against outliers and can surpass ICP and CPD in their presence. SCS does not use iterative optimization in a high-dimensional space and is neither probabilistic nor spectral. It can match rigid and non-rigid transformations, and performs best when the target transformation has between three and six degrees of freedom.