The progressive-iterative approximation method is an iterative data-fitting method with a clear geometric meaning. [1] Given the data points to be fitted, the method obtains a series of fitting curves (surfaces) by iteratively updating the control points, and the limit curve (surface) interpolates or approximates the given data points. [2] It avoids solving a linear system of equations directly and allows constraints to be added flexibly during the iterative process. [3] Therefore, it has been widely used in geometric design and related fields. [2]
The study of iterative methods with geometric meaning can be traced back to the work of scholars such as Prof. Dongxu Qi and Prof. Carl de Boor in the 1970s. [4] [5] In 1975, Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves, [4] and in 1979, de Boor independently proposed this algorithm. [5] In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property. [3] Later, in 2005, Lin et al. proved that curves and surfaces with a normalized and totally positive basis all have this property and named it progressive-iterative approximation (PIA). [1] In 2007, Maekawa et al. changed the algebraic distance in PIA to the geometric distance and named the result geometric interpolation (GI). [6] In 2008, Cheng et al. extended it to subdivision surfaces and named the method progressive interpolation (PI). [7] Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meaning, they are collectively referred to as geometric iterative methods (GIM). [2]
PIA has now been extended to several common curves and surfaces in the geometric design field, [8] including NURBS curves and surfaces, [9] T-spline surfaces, [10] implicit curves and surfaces, [11] etc.
Generally, progressive-iterative approximation can be divided into interpolation and approximation schemes. [2] In interpolation algorithms, the number of control points equals the number of data points; in approximation algorithms, the number of control points can be less than the number of data points. Representative variants include local-PIA, [12] implicit-PIA, [11] fairing-PIA, [13] and isogeometric least-squares progressive-iterative approximation (IG-LSPIA), [14] which is specialized for solving isogeometric analysis problems. [15]
To facilitate the description of the PIA iteration format for different forms of curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, B-spline solids, T-spline surfaces, and triangular Bernstein–Bézier (B–B) surfaces are written uniformly in the blending form [17]
$$\mathbf{P}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}B_{j}(t),$$
where the $\mathbf{P}_{j}$ are the control points and the $B_{j}(t)$ are the corresponding basis (blending) functions. For example, for a B-spline curve the $B_{j}(t)$ are the B-spline basis functions, and for a tensor-product surface $t$ stands for the parameter pair $(u,v)$.
Given an ordered data set $\{\mathbf{Q}_{i}\}_{i=1}^{n}$ with parameters $\{t_{i}\}_{i=1}^{n}$ satisfying $t_{1}<t_{2}<\cdots<t_{n}$, the initial fitting curve (surface) is [1]
$$\mathbf{P}^{(0)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(0)}B_{j}(t),$$
where the initial control points $\mathbf{P}_{j}^{(0)}$ of the initial fitting curve (surface) can be chosen arbitrarily, for example $\mathbf{P}_{j}^{(0)}=\mathbf{Q}_{j}$. Suppose that after the $k$th iteration, the $k$th fitting curve (surface)
$$\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t)$$
is generated. To construct the $(k+1)$st curve (surface), we first calculate the difference vectors
$$\boldsymbol{\Delta}_{i}^{(k)}=\mathbf{Q}_{i}-\mathbf{P}^{(k)}(t_{i}),\quad i=1,2,\ldots,n,$$
and then update the control points by
$$\mathbf{P}_{j}^{(k+1)}=\mathbf{P}_{j}^{(k)}+\boldsymbol{\Delta}_{j}^{(k)},\quad j=1,2,\ldots,n,$$
leading to the $(k+1)$st fitting curve (surface)
$$\mathbf{P}^{(k+1)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k+1)}B_{j}(t).$$
In this way, we obtain a sequence of curves (surfaces) $\{\mathbf{P}^{(k)}(t),\;k=0,1,2,\ldots\}$. It has been proved that, for a normalized and totally positive basis, this sequence of curves (surfaces) converges to a limit curve (surface) that interpolates the given data points, [1] [9] i.e.,
$$\lim_{k\to\infty}\mathbf{P}^{(k)}(t_{i})=\mathbf{Q}_{i},\quad i=1,2,\ldots,n.$$
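In matrix form the iteration reads $\mathbf{P}^{(k+1)}=\mathbf{P}^{(k)}+\bigl(\mathbf{Q}-B\,\mathbf{P}^{(k)}\bigr)$, where $B$ is the collocation matrix with entries $B_{ij}=B_{j}(t_{i})$. The following Python sketch illustrates this format for a clamped cubic B-spline curve; the function names, the uniform parameterization, and the averaged knot vector are illustrative choices, not part of the method's definition.

```python
import numpy as np

def bspline_basis(j, p, knots, t):
    """Value of the j-th B-spline basis function of degree p at parameter t
    (Cox-de Boor recursion; a clamped knot vector is assumed)."""
    if p == 0:
        if knots[j] <= t < knots[j + 1]:
            return 1.0
        # close the last non-degenerate span so the basis still sums to 1 at t = knots[-1]
        if t == knots[-1] and knots[j] < knots[j + 1] == knots[-1]:
            return 1.0
        return 0.0
    val = 0.0
    if knots[j + p] > knots[j]:
        val += (t - knots[j]) / (knots[j + p] - knots[j]) * bspline_basis(j, p - 1, knots, t)
    if knots[j + p + 1] > knots[j + 1]:
        val += (knots[j + p + 1] - t) / (knots[j + p + 1] - knots[j + 1]) * bspline_basis(j + 1, p - 1, knots, t)
    return val

def pia_interpolate(Q, degree=3, iterations=200):
    """Basic PIA: fit the data points Q (m x d array) with a clamped B-spline curve
    whose number of control points equals the number of data points."""
    Q = np.asarray(Q, dtype=float)
    m = len(Q)
    t = np.linspace(0.0, 1.0, m)                       # parameters t_i of the data points
    # clamped knot vector by knot averaging, so the collocation matrix is nonsingular
    interior = [t[i + 1:i + degree + 1].mean() for i in range(m - degree - 1)]
    knots = np.concatenate(([0.0] * (degree + 1), interior, [1.0] * (degree + 1)))
    B = np.array([[bspline_basis(j, degree, knots, ti) for j in range(m)] for ti in t])
    P = Q.copy()                                       # initial control points P^(0) = Q
    for _ in range(iterations):
        delta = Q - B @ P                              # difference vectors Delta_i^(k)
        P = P + delta                                  # update P^(k+1) = P^(k) + Delta^(k)
    return P, knots

# Example: the limit curve interpolates the data, i.e. B @ P approaches Q.
Q = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5], [6.0, 1.0], [7.0, 3.0]])
P, knots = pia_interpolate(Q)
```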
For the B-spline curve and surface fitting problem, Deng and Lin proposed least-squares progressive-iterative approximation (LSPIA), [19] which allows the number of control points to be less than the number of data points and is therefore more suitable for large-scale data fitting problems. [10]
Assume that the number of data points is $m$ and the number of control points is $n$ ($n\le m$). Following the notation of the section above, the $k$th fitting curve (surface) generated after the $k$th iteration is
$$\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t).$$
To generate the $(k+1)$st fitting curve (surface), we compute the following difference vectors in turn: [10] [19]
difference vectors for the data points,
$$\boldsymbol{\delta}_{i}^{(k)}=\mathbf{Q}_{i}-\mathbf{P}^{(k)}(t_{i}),\quad i=1,2,\ldots,m,$$
and difference vectors for the control points,
$$\boldsymbol{\Delta}_{j}^{(k)}=\mu_{j}\sum_{i\in I_{j}}B_{j}(t_{i})\,\boldsymbol{\delta}_{i}^{(k)},\quad j=1,2,\ldots,n,$$
where $I_{j}$ is the index set of the data points in the $j$th group, whose parameters fall in the local support of the $j$th basis function, i.e., $B_{j}(t_{i})\neq0$ for $i\in I_{j}$. The $\mu_{j}$ are weights that guarantee the convergence of the algorithm, usually taken as a common value $\mu_{j}=\mu$ satisfying the condition given in the convergence analysis below.
Finally, the control points of the $(k+1)$st curve (surface) are updated by $\mathbf{P}_{j}^{(k+1)}=\mathbf{P}_{j}^{(k)}+\boldsymbol{\Delta}_{j}^{(k)}$, leading to the $(k+1)$st fitting curve (surface) $\mathbf{P}^{(k+1)}(t)$. In this way, we obtain a sequence of curves (surfaces), and the limit curve (surface) converges to the least-squares fitting result for the given data points. [10] [19]
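With a constant weight $\mu$, the LSPIA iteration can be written as $\mathbf{P}^{(k+1)}=\mathbf{P}^{(k)}+\mu B^{\mathsf T}\bigl(\mathbf{Q}-B\,\mathbf{P}^{(k)}\bigr)$, where $B$ is the rectangular collocation matrix. The sketch below implements this constant-weight matrix form on a precomputed collocation matrix; the choice $\mu=1/\lambda_{\max}(B^{\mathsf T}B)$ and the initialization are illustrative assumptions.

```python
import numpy as np

def lspia(B, Q, iterations=500):
    """LSPIA in matrix form.
    B : (m, n) collocation matrix, B[i, j] = B_j(t_i), with n <= m.
    Q : (m, d) array of data points.
    Returns the n control points of the least-squares fitting curve."""
    B = np.asarray(B, dtype=float)
    Q = np.asarray(Q, dtype=float)
    m, n = B.shape
    # constant weight: 0 < mu < 2 / lambda_max(B^T B) guarantees convergence;
    # mu = 1 / lambda_max is a safe (if not optimal) choice
    mu = 1.0 / np.linalg.eigvalsh(B.T @ B).max()
    P = Q[np.linspace(0, m - 1, n).astype(int)].copy()    # initial control points from a subset of Q
    for _ in range(iterations):
        delta_data = Q - B @ P                 # difference vectors for the data points
        delta_ctrl = mu * (B.T @ delta_data)   # difference vectors for the control points
        P = P + delta_ctrl                     # control-point update
    return P

# The limit satisfies the normal equation B^T B P = B^T Q, so it matches an
# ordinary least-squares solve (up to the remaining iteration error):
# P_ls = np.linalg.lstsq(B, Q, rcond=None)[0]
```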
In local-PIA, the control points are divided into active and fixed control points, whose index sets are denoted by $A$ and $F$, respectively. [12] Assume that the $k$th fitting curve (surface) is $\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t)$, where the fixed control points satisfy
$$\mathbf{P}_{j}^{(k)}=\mathbf{P}_{j}^{(0)},\quad j\in F,\quad k=0,1,2,\ldots,$$
i.e., only the active control points are adjusted, by $\mathbf{P}_{j}^{(k+1)}=\mathbf{P}_{j}^{(k)}+\boldsymbol{\Delta}_{j}^{(k)}$ for $j\in A$. Then, on the one hand, the iterative formula of the difference vectors corresponding to the fixed control points is
$$\boldsymbol{\Delta}_{j}^{(k+1)}=\boldsymbol{\Delta}_{j}^{(k)}-\sum_{h\in A}B_{h}(t_{j})\,\boldsymbol{\Delta}_{h}^{(k)},\quad j\in F.$$
On the other hand, the iterative formula of the difference vectors corresponding to the active control points is
$$\boldsymbol{\Delta}_{j}^{(k+1)}=\bigl(1-B_{j}(t_{j})\bigr)\boldsymbol{\Delta}_{j}^{(k)}-\sum_{h\in A,\,h\neq j}B_{h}(t_{j})\,\boldsymbol{\Delta}_{h}^{(k)},\quad j\in A.$$
Arranging the above difference vectors into a one-dimensional sequence $\boldsymbol{\Delta}^{(k)}$ (fixed entries first, then active entries), the local iteration format in matrix form is
$$\boldsymbol{\Delta}^{(k+1)}=T\,\boldsymbol{\Delta}^{(k)},\qquad T=\begin{pmatrix}I_{F} & -B_{FA}\\[2pt] 0 & I_{A}-B_{AA}\end{pmatrix},$$
where $T$ is the iteration matrix, $I_{F}$ and $I_{A}$ are the identity matrices, and $B_{FA}=\bigl[B_{h}(t_{j})\bigr]_{j\in F,\,h\in A}$ and $B_{AA}=\bigl[B_{h}(t_{j})\bigr]_{j\in A,\,h\in A}$ are submatrices of the collocation matrix.
The above local iteration format converges and can be extended to blending surfaces [12] and subdivision surfaces. [20]
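A minimal sketch of the local format in the interpolation setting (square collocation matrix, one control point per data point) is given below: only the control points whose indices appear in `active` are moved, while the others keep their initial positions. The interface and names are illustrative.

```python
import numpy as np

def local_pia(B, Q, active, P0, iterations=200):
    """Local-PIA sketch (interpolation setting, B square).
    B      : (n, n) collocation matrix, B[i, j] = B_j(t_i).
    Q      : (n, d) data points.
    active : indices of the control points allowed to move.
    P0     : (n, d) initial control points (the fixed ones stay at these values)."""
    B = np.asarray(B, dtype=float)
    Q = np.asarray(Q, dtype=float)
    P = np.asarray(P0, dtype=float).copy()
    active = np.asarray(active, dtype=int)
    for _ in range(iterations):
        delta = Q - B @ P             # difference vectors at all data parameters
        P[active] += delta[active]    # only the active control points are updated
    return P
```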
The progressive-iterative approximation format for implicit curve and surface reconstruction is presented in the following. Given an ordered point cloud $\{\mathbf{Q}_{i}\}_{i=1}^{m}$ and a unit normal vector $\mathbf{n}_{i}$ at each data point, we want to reconstruct an implicit curve (surface) from the given point cloud. To avoid the trivial solution, some offset points are added to the point cloud. [11] They are obtained by offsetting each point a distance $\sigma$ along its unit normal vector,
$$\mathbf{Q}_{m+i}=\mathbf{Q}_{i}+\sigma\,\mathbf{n}_{i},\quad i=1,2,\ldots,m.$$
Assume that $\varepsilon$ is the prescribed value of the implicit function at the offset points, while the function should vanish at the original points. Let the implicit curve after the $k$th iteration be the zero level set of the algebraic B-spline function
$$f^{(k)}(x,y)=\sum_{i}\sum_{j}c_{ij}^{(k)}B_{i}(x)B_{j}(y),$$
where $c_{ij}^{(k)}$ is a control coefficient.
Define the difference vector of the data points as [11]
$$\delta_{l}^{(k)}=\varepsilon_{l}-f^{(k)}(x_{l},y_{l}),\quad l=1,2,\ldots,2m,$$
where $\varepsilon_{l}=0$ for the original points and $\varepsilon_{l}=\varepsilon$ for the offset points. Next, calculate the difference vectors of the control coefficients,
$$\Delta_{ij}^{(k)}=\mu\sum_{l}B_{i}(x_{l})B_{j}(y_{l})\,\delta_{l}^{(k)},$$
where $\mu$ is the convergence coefficient. As a result, the new control coefficients are
$$c_{ij}^{(k+1)}=c_{ij}^{(k)}+\Delta_{ij}^{(k)},$$
leading to the new algebraic B-spline function
$$f^{(k+1)}(x,y)=\sum_{i}\sum_{j}c_{ij}^{(k+1)}B_{i}(x)B_{j}(y).$$
The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions $\{f^{(k)}(x,y),\;k=0,1,2,\ldots\}$. The sequence converges to the solution of a constrained minimization problem when the initial control coefficients are taken as $c_{ij}^{(0)}=0$. [11]
Assume that the implicit surface generated after the $k$th iteration is the zero level set of the trivariate function
$$f^{(k)}(x,y,z)=\sum_{i}\sum_{j}\sum_{l}c_{ijl}^{(k)}B_{i}(x)B_{j}(y)B_{l}(z);$$
the iteration format is then similar to that of the curve case. [11] [21]
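In matrix terms the reconstruction is an LSPIA-type iteration on the control coefficients: the rows of a collocation matrix hold the tensor-product basis values at the original and offset points, the target values are 0 on the point cloud and the offset distance on the offset points, and the coefficients are updated from the residuals. The sketch below assumes the collocation matrix and target values are already assembled; the constant convergence coefficient and the zero initialization mirror the statements above.

```python
import numpy as np

def implicit_pia(A, f, iterations=500):
    """Implicit-PIA sketch in matrix form.
    A : (m, n) collocation matrix of the tensor-product B-spline basis evaluated
        at the point-cloud points and at the offset points (one row per point).
    f : (m,) target values: 0 for the original points, the offset value for the
        offset points.
    Returns the n control coefficients of the implicit B-spline function."""
    A = np.asarray(A, dtype=float)
    f = np.asarray(f, dtype=float)
    c = np.zeros(A.shape[1])                        # zero initial control coefficients
    mu = 1.0 / np.linalg.eigvalsh(A.T @ A).max()    # convergence coefficient
    for _ in range(iterations):
        delta_pts = f - A @ c                       # difference vector of data points
        c = c + mu * (A.T @ delta_pts)              # difference vector of control coefficients
    return c
```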
To develop fairing-PIA, we first define the fairing functionals as follows: [13]
$$\mathcal{F}_{j,l}=\int B_{j}^{(r)}(t)\,B_{l}^{(r)}(t)\,\mathrm{d}t,\quad j,l=1,2,\ldots,n,$$
where $B_{j}^{(r)}(t)$ represents the $r$th derivative of the basis function $B_{j}(t)$ [8] (e.g., a B-spline basis function).
Let the curve after the $k$th iteration be
$$\mathbf{P}^{(k)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k)}B_{j}(t).$$
To construct the new curve $\mathbf{P}^{(k+1)}(t)$, we first calculate the $(k+1)$st difference vectors for the data points, [13]
$$\boldsymbol{\delta}_{i}^{(k)}=\mathbf{Q}_{i}-\mathbf{P}^{(k)}(t_{i}),\quad i=1,2,\ldots,m.$$
Then, the fitting difference vectors for the control points are calculated from these data-point differences, and the fairing vectors for the control points are calculated by applying the fairing functionals to the current control points. [13]
Finally, the control points of the $(k+1)$st curve are produced by adding to each control point a combination of its fitting difference vector and its fairing vector, [13] where the combination involves a normalization weight $\mu_{j}$ and a smoothing weight $\omega_{j}$ corresponding to the $j$th control point. The smoothing weights can be employed to adjust the smoothness individually, thus bringing great flexibility for smoothing. [13] The larger the smoothing weight is, the smoother the generated curve is. The new curve is obtained as
$$\mathbf{P}^{(k+1)}(t)=\sum_{j=1}^{n}\mathbf{P}_{j}^{(k+1)}B_{j}(t).$$
In this way, we obtain a sequence of curves $\{\mathbf{P}^{(k)}(t),\;k=0,1,2,\ldots\}$. The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal ($\omega_{1}=\omega_{2}=\cdots=\omega_{n}$). [13] Similarly, fairing-PIA can be extended to the surface case.
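When all smoothing weights are equal, the limit of fairing-PIA coincides with the minimizer of a fitting-plus-fairness energy. The sketch below is not the published fairing-PIA update; it is a simple gradient-style iteration on that equal-weight energy, written in the same matrix form as LSPIA. The Gram matrix `M` of the $r$th basis-function derivatives, the smoothing weight `omega`, and the step size are assumptions for illustration.

```python
import numpy as np

def faired_fit(B, Q, M, omega=1e-3, iterations=1000):
    """Equal-weight fairing iteration (illustrative, in the spirit of fairing-PIA).
    B     : (m, n) collocation matrix, B[i, j] = B_j(t_i).
    Q     : (m, d) data points.
    M     : (n, n) Gram matrix of r-th basis derivatives, M[j, l] = integral of B_j^(r) * B_l^(r) dt.
    omega : smoothing weight; larger values give a smoother curve."""
    B, Q, M = (np.asarray(x, dtype=float) for x in (B, Q, M))
    m, n = B.shape
    # step size chosen from the largest eigenvalue of the fitting-plus-fairing system matrix
    mu = 1.0 / np.linalg.eigvalsh(B.T @ B + omega * M).max()
    P = Q[np.linspace(0, m - 1, n).astype(int)].copy()
    for _ in range(iterations):
        fit_vec = B.T @ (Q - B @ P)       # fitting difference vectors for the control points
        fair_vec = M @ P                  # fairing vectors for the control points
        P = P + mu * (fit_vec - omega * fair_vec)
    return P
```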
Given a boundary value problem [15]
$$\begin{cases}\mathcal{L}u(\mathbf{x})=f(\mathbf{x}), & \mathbf{x}\in\Omega,\\ \mathcal{G}u(\mathbf{x})=g(\mathbf{x}), & \mathbf{x}\in\partial\Omega,\end{cases}$$
where $u$ is the unknown solution, $\mathcal{L}$ and $\mathcal{G}$ are the differential operator and the boundary operator, respectively, and $f$ and $g$ are continuous functions. In the isogeometric analysis method, NURBS basis functions [8] are used as shape functions to compute a numerical solution of this boundary value problem. [15] The same basis functions are applied to represent the numerical solution $u_{h}$ and the geometric mapping $G$:
$$u_{h}(\xi)=\sum_{j=1}^{n}u_{j}R_{j}(\xi),\qquad G(\xi)=\sum_{j=1}^{n}\mathbf{P}_{j}R_{j}(\xi),$$
where $R_{j}(\xi)$ denotes the $j$th NURBS basis function, $u_{j}$ is a control coefficient, and $\mathbf{P}_{j}$ is a control point. After substituting the collocation points $\{\xi_{i}\}_{i=1}^{m}$ [22] into the strong form of the PDE, we obtain a discretized problem [22]
$$\mathcal{L}u_{h}(\xi_{i})=f\bigl(G(\xi_{i})\bigr),\quad i\in\mathcal{I},\qquad \mathcal{G}u_{h}(\xi_{b})=g\bigl(G(\xi_{b})\bigr),\quad b\in\mathcal{B},$$
where $\mathcal{I}$ and $\mathcal{B}$ denote the subscripts of the internal and boundary collocation points, respectively.
Arranging the control coefficients of the numerical solution into an $n$-dimensional column vector, i.e., $\mathbf{u}=[u_{1},u_{2},\ldots,u_{n}]^{\mathsf T}$, the discretized problem can be reformulated in matrix form as
$$A\mathbf{u}=\mathbf{b},$$
where $A$ is the collocation matrix and $\mathbf{b}$ is the load vector.
Assume that the discretized load values $\{b_{i}\}_{i=1}^{m}$ are the data points to be fitted. Given an initial guess of the control coefficients $u_{j}^{(0)}$, we obtain an initial blending function [14]
$$U^{(0)}(\xi)=\sum_{j=1}^{n}u_{j}^{(0)}A_{j}(\xi),$$
where $A_{j}(\xi)$, $j=1,2,\ldots,n$, represents the combination of different-order derivatives of the NURBS basis functions determined by the operators $\mathcal{L}$ and $\mathcal{G}$,
$$A_{j}(\xi)=\begin{cases}\mathcal{L}R_{j}(\xi), & \xi\in\Omega^{\circ},\\ \mathcal{G}R_{j}(\xi), & \xi\in\partial\Omega,\end{cases}$$
where $\Omega^{\circ}$ and $\partial\Omega$ indicate the interior and the boundary of the parameter domain, respectively. Each $A_{j}(\xi)$ corresponds to the $j$th control coefficient. Assume that $\mathcal{J}_{\mathrm{in}}$ and $\mathcal{J}_{\mathrm{b}}$ are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained by strong or weak imposition and are fixed, i.e.,
$$u_{j}^{(k)}=u_{j}^{(0)},\quad j\in\mathcal{J}_{\mathrm{b}},\quad k=0,1,2,\ldots.$$
The $k$th blending function, generated after the $k$th iteration of IG-LSPIA, [14] is assumed to be
$$U^{(k)}(\xi)=\sum_{j=1}^{n}u_{j}^{(k)}A_{j}(\xi).$$
Then, the difference vectors for the collocation points (DCP) in the $(k+1)$st iteration are obtained using
$$\delta_{i}^{(k)}=b_{i}-U^{(k)}(\xi_{i}),\quad i=1,2,\ldots,m.$$
Moreover, group all load values whose parameters fall in the local support of the $j$th derivative function, i.e., those with $A_{j}(\xi_{i})\neq0$, into the $j$th group corresponding to the $j$th control coefficient, and denote the index set of the $j$th group of load values as $I_{j}$. Lastly, the differences for the control coefficients (DCC) can be constructed as follows: [14]
$$d_{j}^{(k)}=\mu\sum_{i\in I_{j}}A_{j}(\xi_{i})\,\delta_{i}^{(k)},\quad j\in\mathcal{J}_{\mathrm{in}},$$
where $\mu$ is a normalization weight to guarantee the convergence of the algorithm.
Thus, the new control coefficients are updated via
$$u_{j}^{(k+1)}=u_{j}^{(k)}+d_{j}^{(k)},\quad j\in\mathcal{J}_{\mathrm{in}},\qquad u_{j}^{(k+1)}=u_{j}^{(k)},\quad j\in\mathcal{J}_{\mathrm{b}}.$$
Consequently, the $(k+1)$st blending function is generated as
$$U^{(k+1)}(\xi)=\sum_{j=1}^{n}u_{j}^{(k+1)}A_{j}(\xi).$$
The above iteration process is performed until the desired fitting precision is reached, yielding a sequence of blending functions
$$\{U^{(k)}(\xi),\;k=0,1,2,\ldots\}.$$
The IG-LSPIA converges to the solution of a constrained least-squares collocation problem. [14]
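The iteration is again LSPIA-like, with the combined derivative functions $A_{j}(\xi)$ playing the role of the blending basis and the load values playing the role of the data points. The sketch below assumes the collocation matrix and load vector are already assembled, uses a constant normalization weight, and imposes the boundary coefficients strongly; all names are illustrative.

```python
import numpy as np

def ig_lspia(A, b, boundary_idx, boundary_vals, iterations=1000):
    """IG-LSPIA-style sketch in matrix form.
    A             : (m, n) collocation matrix whose entries combine derivatives of the
                    NURBS basis functions evaluated at the collocation points.
    b             : (m,) discretized load values (the "data points" being fitted).
    boundary_idx  : indices of the boundary control coefficients (kept fixed).
    boundary_vals : their strongly imposed values."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    c = np.zeros(n)
    c[boundary_idx] = boundary_vals                      # boundary coefficients fixed
    interior = np.setdiff1d(np.arange(n), boundary_idx)  # free (internal) coefficients
    Ai = A[:, interior]
    mu = 1.0 / np.linalg.eigvalsh(Ai.T @ Ai).max()       # normalization weight
    for _ in range(iterations):
        dcp = b - A @ c                                  # differences at collocation points (DCP)
        c[interior] += mu * (Ai.T @ dcp)                 # differences for control coefficients (DCC)
    return c
```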
The convergence of PIA is related to the properties of the collocation matrix. If the spectral radius of the iteration matrix is less than 1, then PIA is convergent. It has been shown that the PIA methods for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull–Clark, Doo–Sabin) are convergent. [2]
Non-singular case
For LSPIA with a constant weight $\mu$, let $B$ denote the $m\times n$ collocation matrix. When the matrix $B^{\mathsf T}B$ is nonsingular, the following results can be obtained. [23]
Lemma. If $0<\mu<\dfrac{2}{\lambda_{0}}$, where $\lambda_{0}$ is the largest eigenvalue of the matrix $B^{\mathsf T}B$, then the eigenvalues of $\mu B^{\mathsf T}B$ are real numbers and satisfy $0<\lambda\bigl(\mu B^{\mathsf T}B\bigr)<2$.
Proof. Since $B^{\mathsf T}B$ is nonsingular, symmetric, and positive semi-definite, its eigenvalues $\lambda_{i}$ are real and satisfy $0<\lambda_{i}\le\lambda_{0}$. Moreover, since $0<\mu<\dfrac{2}{\lambda_{0}}$,
$$0<\mu\lambda_{i}\le\mu\lambda_{0}<2.$$
In summary, $0<\lambda\bigl(\mu B^{\mathsf T}B\bigr)<2$.
Theorem. If $0<\mu<\dfrac{2}{\lambda_{0}}$, LSPIA is convergent and converges to the least-squares fitting result for the given data points. [10] [19]
Proof. From the matrix form of the iterative format, we obtain
$$\mathbf{P}^{(k+1)}=\mathbf{P}^{(k)}+\mu B^{\mathsf T}\bigl(\mathbf{Q}-B\mathbf{P}^{(k)}\bigr)=\bigl(I-\mu B^{\mathsf T}B\bigr)\mathbf{P}^{(k)}+\mu B^{\mathsf T}\mathbf{Q}.$$
According to the above lemma, the spectral radius of the matrix $\mu B^{\mathsf T}B$ satisfies
$$0<\rho\bigl(\mu B^{\mathsf T}B\bigr)<2.$$
Thus, the spectral radius of the iteration matrix satisfies
$$\rho\bigl(I-\mu B^{\mathsf T}B\bigr)<1,$$
so the iteration converges. When $k\to\infty$, the limit $\mathbf{P}^{(\infty)}$ satisfies
$$\mathbf{P}^{(\infty)}=\bigl(I-\mu B^{\mathsf T}B\bigr)\mathbf{P}^{(\infty)}+\mu B^{\mathsf T}\mathbf{Q}.$$
As a result,
$$B^{\mathsf T}B\,\mathbf{P}^{(\infty)}=B^{\mathsf T}\mathbf{Q},$$
i.e., the limit satisfies the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for the given data points.
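The argument can be checked numerically: for a full-column-rank collocation matrix $B$ and any $0<\mu<2/\lambda_{0}$, the spectral radius of $I-\mu B^{\mathsf T}B$ stays below 1 and the iteration settles on the solution of the normal equation. A small sanity check with a random stand-in matrix (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((40, 10))                       # stand-in collocation matrix (full column rank)
Q = rng.random((40, 2))                        # data points
lam0 = np.linalg.eigvalsh(B.T @ B).max()
mu = 1.0 / lam0                                # any value in (0, 2 / lam0) works

# spectral radius of the iteration matrix I - mu * B^T B is strictly below 1
rho = np.abs(np.linalg.eigvals(np.eye(10) - mu * B.T @ B)).max()
assert rho < 1.0

# the LSPIA iteration converges to the solution of the normal equation B^T B P = B^T Q
P = np.zeros((10, 2))
for _ in range(5000):
    P = P + mu * B.T @ (Q - B @ P)
P_ls = np.linalg.lstsq(B, Q, rcond=None)[0]
assert np.allclose(P, P_ls, atol=1e-6)
```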
Lin et al. showed that LSPIA converges even when the iteration matrix is singular. [17]
Since PIA has an obvious geometric meaning, constraints can easily be integrated into the iterations. PIA has been widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis.
Data fitting
Implicit reconstruction
For implicit curve and surface reconstruction, the PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm. [11]
Offset curve approximation
Firstly, the data points are sampled on the original curve. Then, the initial polynomial approximation curve or rational approximation curve of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method. [33]
Mesh generation
Given an input triangular mesh model, the algorithm first constructs an initial hexahedral mesh and extracts the quadrilateral mesh of its surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm guarantees the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero. [34]
Data compression
First, the image data are converted into a one-dimensional sequence by a Hilbert scan; then, these data points are fitted by LSPIA to generate a Hilbert curve; finally, the Hilbert curve is sampled, and the compressed image can be reconstructed. This method preserves the neighborhood information of pixels well. [35]
Fairing curve and surface generation
Given a data point set, we first define the fairing functionals and calculate the fitting difference vectors and the fairing vectors of the control points; then, the control points are adjusted using the fairing weights. Repeating these steps, a fairing curve or surface is generated iteratively. Owing to the ample fairing parameters, the method can achieve global or local fairing. It is also flexible to adjust knot vectors, fairing weights, or the data parameterization after each round of iteration. The traditional energy-minimization method is a special case of this method, namely when the smoothing weights are all the same. [13]
Isogeometric analysis
The discretized load values are regarded as the set of data points, and the combination of the basis functions and their derivative functions is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to the fitting result of the blending function to the load values. In addition, the average iteration time per step is only related to the number of data points (i.e., collocation points) and unrelated to the number of control coefficients. [14]