In mathematics, the progressive-iterative approximation method is an iterative method of data fitting with a clear geometric meaning. [1] Given a set of data points to be fitted, the method obtains a series of fitting curves (or surfaces) by iteratively updating the control points, and the limit curve (surface) can interpolate or approximate the given data points. [2] It avoids solving a linear system of equations directly and allows flexibility in adding constraints during the iterative process. [3] Therefore, it has been widely used in geometric design and related fields. [2]
The study of the iterative method with geometric meaning can be traced back to the work of scholars such as Dongxu Qi and Carl de Boor in the 1970s. [4] [5] In 1975, Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves, [4] and in 1979, de Boor independently proposed this algorithm. [5] In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property. [3] Later, in 2005, Lin et al. proved that the curves and surfaces with normalized and totally positive basis all have this property and named it progressive iterative approximation (PIA). [1] In 2007, Maekawa et al. changed the algebraic distance in PIA to geometric distance and named it geometric interpolation (GI). [6] In 2008, Cheng et al. extended it to subdivision surfaces and named the method progressive interpolation (PI). [7] Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meanings, they are collectively referred to as geometric iterative methods (GIM). [2]
PIA is now extended to several common curves and surfaces in the geometric design field, [8] including NURBS curves and surfaces, [9] T-spline surfaces, [10] and implicit curves and surfaces. [11]
Generally, progressive-iterative approximation (PIA) can be divided into interpolation and approximation schemes. [2] In interpolation algorithms, the number of control points is equal to that of the data points; in approximation algorithms, the number of control points can be less than that of the data points. Specifically, there are some representative iteration methods—such as local-PIA, [12] implicit-PIA, [11] fairing-PIA, [13] and isogeometric least-squares progressive-iterative approximation (IG-LSPIA) [14] —that are specialized for solving the isogeometric analysis problem. [15]
In interpolation algorithms of PIA, [1] [3] [9] [16] every data point is used as a control point. To describe the PIA iteration format uniformly for different forms of curves and surfaces, the fitting curve is written as a blending of control points $\{P_i\}_{i=0}^{n}$ with a normalized and totally positive basis $\{B_i(t)\}_{i=0}^{n}$:
$$C(t) = \sum_{i=0}^{n} P_i B_i(t).$$
For example, $\{B_i(t)\}$ can be the Bernstein basis or the B-spline basis.
Additionally, this can be applied to NURBS curves and surfaces, T-spline surfaces, and triangular Bernstein–Bézier surfaces. [18]
Given an ordered data set $\{Q_i\}_{i=0}^{n}$ with parameters $\{t_i\}_{i=0}^{n}$ satisfying $t_0 < t_1 < \cdots < t_n$, the initial fitting curve is [1]
$$C^{(0)}(t) = \sum_{i=0}^{n} P_i^{(0)} B_i(t),$$
where the initial control points $P_i^{(0)}$ of the initial fitting curve can be randomly selected; a natural choice is $P_i^{(0)} = Q_i$. Suppose that after the $k$th iteration, the $k$th fitting curve is generated by
$$C^{(k)}(t) = \sum_{i=0}^{n} P_i^{(k)} B_i(t). \qquad (1)$$
To construct the $(k+1)$st curve, we first calculate the difference vectors
$$D_i^{(k)} = Q_i - C^{(k)}(t_i), \quad i = 0, 1, \dots, n,$$
and use them to update the control points by
$$P_i^{(k+1)} = P_i^{(k)} + D_i^{(k)},$$
which leads to the $(k+1)$st fitting curve
$$C^{(k+1)}(t) = \sum_{i=0}^{n} P_i^{(k+1)} B_i(t).$$
In this way, we obtain a sequence of curves $\{C^{(k)}(t)\}$, which converges to a limit curve that interpolates the given data points, [1] [9] i.e.,
$$\lim_{k \to \infty} C^{(k)}(t_i) = Q_i, \quad i = 0, 1, \dots, n.$$
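The interpolation iteration above can be sketched in a few lines of NumPy. This is an illustrative example, not code from the cited papers: it uses the cubic Bernstein basis (a normalized totally positive basis, so PIA converges) with four hypothetical planar data points at uniform parameters, and checks that the limit curve interpolates them.

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix B with B[i, j] = B_j^n(t_i) (Bernstein basis)."""
    return np.array([[comb(n, j) * t**j * (1 - t)**(n - j)
                      for j in range(n + 1)] for t in ts])

# Hypothetical ordered data points Q_i with uniform parameters t_i
Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
ts = np.linspace(0.0, 1.0, len(Q))
B = bernstein_collocation(ts, len(Q) - 1)

P = Q.copy()                 # initial control points: P^(0) = Q_i
for _ in range(200):
    D = Q - B @ P            # difference vectors D_i^(k) = Q_i - C^(k)(t_i)
    P = P + D                # update: P^(k+1) = P^(k) + D^(k)

residual = np.max(np.abs(B @ P - Q))   # limit curve interpolates the data
print(residual)
```

Each sweep multiplies the residual vector by $I - B$; for this collocation matrix the spectral radius of $I - B$ is below 1, so the residual decays geometrically to machine precision.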
For the B-spline curve and surface fitting problem, Deng and Lin proposed least-squares progressive-iterative approximation (LSPIA), [10] [19] which allows the number of control points to be less than the number of data points and is therefore better suited to large-scale data fitting problems. [10]
Assume that there are $m+1$ data points $\{Q_j\}_{j=0}^{m}$ with parameters $\{t_j\}_{j=0}^{m}$ and $n+1$ control points, where $n \le m$. Start with equation (1), which gives the $k$th fitting curve as
$$C^{(k)}(t) = \sum_{i=0}^{n} P_i^{(k)} B_i(t).$$
To generate the $(k+1)$st fitting curve, first compute the difference vectors for the data points [10] [19]
$$D_j^{(k)} = Q_j - C^{(k)}(t_j), \quad j = 0, 1, \dots, m,$$
and then the difference vectors for the control points
$$\delta_i^{(k)} = \mu \sum_{j \in I_i} B_i(t_j)\, D_j^{(k)}, \quad i = 0, 1, \dots, n,$$
where $I_i$ is the index set of the data points in the $i$th group, whose parameters fall in the local support of the $i$th basis function, i.e., $B_i(t_j) \neq 0$ for $j \in I_i$. The weight $\mu$ guarantees the convergence of the algorithm and is usually taken with $0 < \mu < 2/\lambda_0$, where $\lambda_0$ is the largest eigenvalue of $B^{\mathsf T} B$ and $B = [B_i(t_j)]$ is the collocation matrix of the basis at the parameters.
Finally, the control points of the $(k+1)$st curve are updated by
$$P_i^{(k+1)} = P_i^{(k)} + \delta_i^{(k)},$$
leading to the $(k+1)$st fitting curve $C^{(k+1)}(t) = \sum_{i=0}^{n} P_i^{(k+1)} B_i(t)$. In this way, we obtain a sequence of curves, and the limit curve converges to the least-squares fitting result to the given data points. [10] [19]
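The LSPIA iteration can be sketched in its matrix form $P^{(k+1)} = P^{(k)} + \mu B^{\mathsf T}(Q - BP^{(k)})$. The example below is illustrative, not from the cited papers: a cubic Bernstein basis stands in for a B-spline basis, ten data points are fitted with only four control points, and the limit is checked against a direct least-squares solve.

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix B with B[j, i] = B_i^n(t_j)."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)] for t in ts])

# 10 data points sampled from a test curve, but only 4 control points
ts = np.linspace(0.0, 1.0, 10)
Q = np.stack([ts, np.sin(np.pi * ts)], axis=1)
B = bernstein_collocation(ts, 3)          # 10 x 4 collocation matrix

eigs = np.linalg.eigvalsh(B.T @ B)
mu = 2.0 / (eigs.max() + eigs.min())      # scalar weight inside (0, 2/lambda_max)

P = np.zeros((4, 2))                      # initial control points
for _ in range(1000):
    D = Q - B @ P                         # difference vectors at the data points
    P = P + mu * (B.T @ D)                # gather weighted differences onto control points

P_ls = np.linalg.lstsq(B, Q, rcond=None)[0]   # direct least-squares solution
print(np.max(np.abs(P - P_ls)))               # LSPIA limit matches least squares
```

The fixed point of the iteration satisfies $B^{\mathsf T}(Q - BP) = 0$, i.e., the normal equations, which is why the two solutions agree.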
In the local-PIA method, [12] the control points are divided into active and fixed control points, whose index sets are denoted by $I_a$ and $I_f$, respectively. Assume that the $k$th fitting curve is $C^{(k)}(t) = \sum_{i=0}^{n} P_i^{(k)} B_i(t)$, where the fixed control points satisfy
$$P_i^{(k)} = P_i^{(0)}, \quad i \in I_f,$$
while the active control points are updated by $P_i^{(k+1)} = P_i^{(k)} + D_i^{(k)}$ with $D_i^{(k)} = Q_i - C^{(k)}(t_i)$. On the one hand, the iterative formula of the difference vectors corresponding to the fixed control points is
$$D_i^{(k+1)} = D_i^{(k)} - \sum_{j \in I_a} B_j(t_i)\, D_j^{(k)}, \quad i \in I_f.$$
On the other hand, the iterative formula of the difference vectors corresponding to the active control points takes the same form,
$$D_i^{(k+1)} = D_i^{(k)} - \sum_{j \in I_a} B_j(t_i)\, D_j^{(k)}, \quad i \in I_a.$$
Arranging the difference vectors, fixed indices first, into a one-dimensional sequence $D^{(k)} = [D_f^{(k)}, D_a^{(k)}]^{\mathsf T}$, the local iteration format in matrix form is $D^{(k+1)} = T\, D^{(k)}$, where $T$ is the iteration matrix
$$T = \begin{bmatrix} I & -B_{fa} \\ 0 & I - B_{aa} \end{bmatrix},$$
where the $I$ are identity matrices of the appropriate sizes, and $B_{fa}$ and $B_{aa}$ collect the collocation values $B_j(t_i)$ for $j \in I_a$ with $i \in I_f$ and $i \in I_a$, respectively. The above local iteration format converges and can be extended to blending surfaces [12] and subdivision surfaces. [20]
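A minimal illustrative sketch of local-PIA (not code from [12]): reusing the cubic Bernstein interpolation setup with hypothetical data, one control point is held fixed while the others iterate. The fixed point never moves, and the data points associated with the active control points are still interpolated in the limit.

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix B with B[i, j] = B_j^n(t_i)."""
    return np.array([[comb(n, j) * t**j * (1 - t)**(n - j)
                      for j in range(n + 1)] for t in ts])

Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
ts = np.linspace(0.0, 1.0, 4)
B = bernstein_collocation(ts, 3)

active = [1, 2, 3]              # control points allowed to move; P_0 stays fixed
P = Q.copy()                    # initial control points
P0_fixed = P[0].copy()

for _ in range(300):
    D = Q - B @ P               # difference vectors at all data points
    P[active] += D[active]      # update only the active control points

fixed_drift = np.max(np.abs(P[0] - P0_fixed))          # fixed point never moved
active_residual = np.max(np.abs((B @ P - Q)[active]))  # active data points interpolated
print(fixed_drift, active_residual)
```

Convergence here is governed by the block $I - B_{aa}$ of the iteration matrix: its spectral radius is below 1, so the active difference vectors vanish geometrically.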
The PIA format for implicit curve and surface reconstruction is presented in the following. [11] Given an ordered point cloud $\{Q_i\}_{i=0}^{n}$ and a unit normal vector $n_i$ at each data point, we want to reconstruct an implicit curve from the given point cloud. To avoid a trivial solution, some offset points are added to the point cloud; [11] they are offset by a distance $\sigma$ along the unit normal vector of each point,
$$Q_{n+1+i} = Q_i + \sigma n_i, \quad i = 0, 1, \dots, n.$$
Assume that $\epsilon$ is the value of the implicit function at the offset points. Let the implicit curve after the $k$th iteration be
$$f^{(k)}(x, y) = \sum_{i} \sum_{j} C_{ij}^{(k)} B_i(x) B_j(y),$$
where $C_{ij}^{(k)}$ is the control coefficient.
Define the difference vectors of the data points as [11]
$$d_l^{(k)} = \epsilon_l - f^{(k)}(x_l, y_l),$$
where $\epsilon_l = 0$ at the original points and $\epsilon_l = \epsilon$ at the offset points. Next, calculate the difference vectors of the control coefficients,
$$\delta_{ij}^{(k)} = \mu \sum_{l} B_i(x_l) B_j(y_l)\, d_l^{(k)},$$
where $\mu$ is the convergence coefficient. As a result, the new control coefficients are
$$C_{ij}^{(k+1)} = C_{ij}^{(k)} + \delta_{ij}^{(k)},$$
leading to the new algebraic B-spline curve
$$f^{(k+1)}(x, y) = \sum_{i} \sum_{j} C_{ij}^{(k+1)} B_i(x) B_j(y).$$
The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions $\{f^{(k)}\}$. The sequence converges to the solution of a minimization problem with constraints when the initial control coefficients satisfy $C_{ij}^{(0)} = 0$. [11]
Assume that the implicit surface generated after the $k$th iteration is
$$f^{(k)}(x, y, z) = \sum_{i} \sum_{j} \sum_{l} C_{ijl}^{(k)} B_i(x) B_j(y) B_l(z);$$
the iteration format is similar to that of the curve case. [11] [21]
To develop fairing-PIA, we first define the functionals as follows: [13]
$$\mathcal{F}_i(g) = \int g^{(r)}(t)\, B_i^{(r)}(t)\, \mathrm{d}t, \quad i = 0, 1, \dots, n,$$
where $B_i^{(r)}$ represents the $r$th derivative of the basis function $B_i$ [8] (e.g., a B-spline basis function).
Let the curve after the $k$th iteration be
$$C^{(k)}(t) = \sum_{i=0}^{n} P_i^{(k)} B_i(t).$$
To construct the new curve $C^{(k+1)}(t)$, we first calculate the $(k+1)$st difference vectors for the data points, [13]
$$D_j^{(k)} = Q_j - C^{(k)}(t_j), \quad j = 0, 1, \dots, m.$$
Then, the fitting difference vectors $\delta_i^{(k)}$ and the fairing vectors $\eta_i^{(k)}$ for the control points are calculated by [13]
$$\delta_i^{(k)} = \sum_{j \in I_i} B_i(t_j)\, D_j^{(k)}, \qquad \eta_i^{(k)} = \sum_{l=0}^{n} \mathcal{F}_i(B_l)\, P_l^{(k)}.$$
Finally, the control points of the $(k+1)$st curve are produced by [13]
$$P_i^{(k+1)} = P_i^{(k)} + \mu_i \left[ (1 - \omega_i)\, \delta_i^{(k)} - \omega_i\, \eta_i^{(k)} \right],$$
where $\mu_i$ is a normalization weight and $\omega_i$ is a smoothing weight corresponding to the $i$th control point. The smoothing weights can be employed to adjust the smoothness individually, thus bringing great flexibility for smoothness control. [13] The larger the smoothing weight is, the smoother the generated curve is. The new curve is obtained as
$$C^{(k+1)}(t) = \sum_{i=0}^{n} P_i^{(k+1)} B_i(t).$$
In this way, we obtain a sequence of curves $\{C^{(k)}(t)\}$. The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal ($\omega_i \equiv \omega$). [13] Similarly, fairing-PIA can be extended to the surface case.
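A minimal sketch of the fairing idea, with assumptions stated plainly: all smoothing weights are equal ($\omega_i \equiv \omega$), a cubic Bernstein basis replaces the B-spline basis, and the second-derivative energy matrix is computed in closed form. Under these assumptions the iteration is gradient descent on a penalized least-squares objective, so its limit can be checked against the direct energy-minimization solve, matching the equal-weights special case noted above.

```python
import numpy as np
from math import comb

def bernstein_collocation(ts, n):
    """Collocation matrix B with B[j, i] = B_i^n(t_j)."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)] for t in ts])

ts = np.linspace(0.0, 1.0, 12)
rng = np.random.default_rng(0)
Q = np.stack([ts, np.sin(np.pi * ts) + 0.05 * rng.standard_normal(12)], axis=1)
B = bernstein_collocation(ts, 3)

# Energy matrix K[i, l] = integral of B_i'' * B_l'' over [0, 1].
# Cubic Bernstein second derivatives are linear: B_i''(t) = a_i + b_i * t.
a = np.array([6.0, -12.0, 6.0, 0.0])
b = np.array([-6.0, 18.0, -18.0, 6.0])
K = np.outer(a, a) + (np.outer(a, b) + np.outer(b, a)) / 2 + np.outer(b, b) / 3

w = 0.1                                    # global smoothing weight
M = (1 - w) * (B.T @ B) + w * K
eigs = np.linalg.eigvalsh(M)
mu = 2.0 / (eigs.max() + eigs.min())       # normalization weight

P = np.zeros((4, 2))
for _ in range(5000):
    delta = B.T @ (Q - B @ P)              # fitting difference vectors
    eta = K @ P                            # fairing vectors
    P = P + mu * ((1 - w) * delta - w * eta)

P_fair = np.linalg.solve(M, (1 - w) * B.T @ Q)  # direct energy-minimization solution
print(np.max(np.abs(P - P_fair)))
```

Raising `w` trades fidelity to the noisy data for a smaller bending energy, i.e., a smoother curve.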
Isogeometric least-squares progressive-iterative approximation (IG-LSPIA) solves the isogeometric collocation problem in an LSPIA-like way. [14] Given a boundary value problem [15]
$$\begin{cases} \mathcal{L}u = f & \text{in } \Omega, \\ \mathcal{B}u = g & \text{on } \partial\Omega, \end{cases}$$
where $u$ is the unknown solution, $\mathcal{L}$ is the differential operator, $\mathcal{B}$ is the boundary operator, and $f$ and $g$ are continuous functions. In the isogeometric analysis method, NURBS basis functions [8] are used as shape functions to solve for the numerical solution of this boundary value problem. [15] The same basis functions are applied to represent the numerical solution $u_h$ and the geometric mapping $G$:
$$u_h(\xi) = \sum_{i} u_i R_i(\xi), \qquad G(\xi) = \sum_{i} P_i R_i(\xi),$$
where $R_i$ denotes the NURBS basis function and $u_i$ is the control coefficient. After substituting the collocation points $\{\xi_j\}$ [22] into the strong form of the PDE, we obtain a discretized problem [22]
$$\mathcal{L}u_h(\xi_j) = f(\xi_j), \quad j \in J_{\mathrm{in}}, \qquad \mathcal{B}u_h(\xi_j) = g(\xi_j), \quad j \in J_{\mathrm{b}},$$
where $J_{\mathrm{in}}$ and $J_{\mathrm{b}}$ denote the subscripts of internal and boundary collocation points, respectively.
Arranging the control coefficients of the numerical solution into a column vector $\mathbf{u} = [u_0, u_1, \dots, u_n]^{\mathsf T}$, the discretized problem can be reformulated in matrix form as
$$A \mathbf{u} = \mathbf{b},$$
where $A$ is the collocation matrix and $\mathbf{b}$ is the load vector.
Assume that the discretized load values $\{b_j\}$ are the data points to be fitted. Given an initial guess of the control coefficients $\{u_i^{(0)}\}$, we obtain an initial blending function [14]
$$U^{(0)}(\xi) = \sum_{i} u_i^{(0)} \hat{R}_i(\xi),$$
where $\hat{R}_i$ represents the combination of different order derivatives of the NURBS basis functions determined by the operators $\mathcal{L}$ and $\mathcal{B}$: $\hat{R}_i(\xi) = \mathcal{L}R_i(\xi)$ in the interior and $\hat{R}_i(\xi) = \mathcal{B}R_i(\xi)$ on the boundary of the parameter domain. Each $\hat{R}_i$ corresponds to the $i$th control coefficient. Assume that $I_{\mathrm{in}}$ and $I_{\mathrm{b}}$ are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained using strong or weak imposition and are fixed, i.e.,
$$u_i^{(k)} = u_i^{(0)}, \quad i \in I_{\mathrm{b}}, \quad k = 0, 1, 2, \dots$$
The $k$th blending function, generated after the $k$th iteration of IG-LSPIA, [14] is assumed to be
$$U^{(k)}(\xi) = \sum_{i} u_i^{(k)} \hat{R}_i(\xi).$$
Then, the differences for collocation points (DCP) in the $(k+1)$st iteration are obtained by
$$d_j^{(k)} = b_j - U^{(k)}(\xi_j).$$
Moreover, group all load values whose parameters fall in the local support of the $i$th derivatives function, i.e., $\hat{R}_i(\xi_j) \neq 0$, into the $i$th group corresponding to the $i$th control coefficient, and denote the index set of the $i$th group of load values as $J_i$. Lastly, the differences for control coefficients (DCC) can be constructed as follows: [14]
$$\delta_i^{(k)} = \mu \sum_{j \in J_i} \hat{R}_i(\xi_j)\, d_j^{(k)}, \quad i \in I_{\mathrm{in}},$$
where $\mu$ is a normalization weight to guarantee the convergence of the algorithm.
Thus, the new control coefficients are updated via
$$u_i^{(k+1)} = u_i^{(k)} + \delta_i^{(k)}, \quad i \in I_{\mathrm{in}},$$
and consequently the $(k+1)$st blending function is generated as
$$U^{(k+1)}(\xi) = \sum_{i} u_i^{(k+1)} \hat{R}_i(\xi).$$
The above iteration process is performed until the desired fitting precision is reached, yielding a sequence of blending functions $\{U^{(k)}\}$. IG-LSPIA converges to the solution of a constrained least-squares collocation problem. [14]
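IG-LSPIA itself uses NURBS bases and general operators; the sketch below is a deliberately simplified 1D analogue, not the setup of [14]: a Poisson problem $u'' = f$ with homogeneous Dirichlet conditions, discretized by collocation with a degree-4 Bernstein basis. The boundary coefficients are fixed (here exactly zero), only the interior coefficients iterate via the DCP/DCC steps, and the limit is checked against a direct least-squares collocation solve.

```python
import numpy as np
from math import comb

def bern(t, n, i):
    """Bernstein basis B_i^n(t); zero outside 0 <= i <= n."""
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bern_d2(t, n, i):
    """Second derivative of B_i^n(t)."""
    return n * (n - 1) * (bern(t, n - 2, i - 2)
                          - 2 * bern(t, n - 2, i - 1)
                          + bern(t, n - 2, i))

# Collocation discretization of u'' = f on (0, 1) with u(0) = u(1) = 0
n = 4
xs = np.linspace(0.05, 0.95, 10)           # collocation points
A = np.array([[bern_d2(x, n, i) for i in range(n + 1)] for x in xs])
b = -np.pi**2 * np.sin(np.pi * xs)         # discretized load values

interior = [1, 2, 3]                       # boundary coefficients u_0 = u_n = 0 stay fixed
Ai = A[:, interior]
eigs = np.linalg.eigvalsh(Ai.T @ Ai)
mu = 2.0 / (eigs.max() + eigs.min())       # normalization weight

u = np.zeros(n + 1)
for _ in range(2000):
    d = b - A @ u                          # differences at collocation points (DCP)
    u[interior] += mu * (Ai.T @ d)         # differences for control coefficients (DCC)

u_direct = np.linalg.lstsq(Ai, b, rcond=None)[0]   # direct least-squares collocation
print(np.max(np.abs(u[interior] - u_direct)))
```

Note that each iteration costs only a few matrix–vector products at the collocation points, which reflects the remark below that the per-step cost depends on the number of collocation points rather than on a linear solve.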
In the following, let $n+1$ be the number of control points and $m+1$ the number of data points.
If $m = n$, the PIA iterative format in matrix form is [1]
$$\mathbf{P}^{(k+1)} = \mathbf{P}^{(k)} + \left( \mathbf{Q} - B \mathbf{P}^{(k)} \right),$$
where $\mathbf{P}^{(k)} = [P_0^{(k)}, P_1^{(k)}, \dots, P_n^{(k)}]^{\mathsf T}$, $\mathbf{Q} = [Q_0, Q_1, \dots, Q_n]^{\mathsf T}$, and $B = [B_j(t_i)]_{i,j=0}^{n}$ is the collocation matrix of the basis at the parameters. The convergence of the PIA is related to the properties of the collocation matrix: if the spectral radius of the iteration matrix $I - B$ is less than $1$, then the PIA is convergent. It has been shown that the PIA methods are convergent for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull–Clark, Doo–Sabin). [2]
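The convergence condition is easy to verify numerically. The snippet below, an illustrative check rather than part of the cited results, builds the collocation matrix of the cubic Bernstein basis at uniform parameters and confirms that the spectral radius of $I - B$ is below 1.

```python
import numpy as np
from math import comb

n = 3
ts = np.linspace(0.0, 1.0, n + 1)
# Collocation matrix B[i, j] = B_j^n(t_i) of the cubic Bernstein basis
B = np.array([[comb(n, j) * t**j * (1 - t)**(n - j)
               for j in range(n + 1)] for t in ts])

rho = np.max(np.abs(np.linalg.eigvals(np.eye(n + 1) - B)))
print(rho)   # spectral radius of the iteration matrix I - B
```

For this matrix the eigenvalues of $B$ are $1$, $1$, $2/3$, and $2/9$, so $\rho(I - B) = 7/9 \approx 0.78$: each PIA sweep contracts the interpolation error by at least that factor.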
If $m > n$, the LSPIA in matrix form is [10] [19]
$$\mathbf{P}^{(k+1)} = \mathbf{P}^{(k)} + \mu B^{\mathsf T} \left( \mathbf{Q} - B \mathbf{P}^{(k)} \right),$$
where $B = [B_j(t_i)]$ is now the $(m+1) \times (n+1)$ collocation matrix. When the matrix $B^{\mathsf T} B$ is nonsingular, the following results can be obtained: [23]
Lemma — If $0 < \mu < 2/\lambda_0$, where $\lambda_0$ is the largest eigenvalue of the matrix $B^{\mathsf T} B$, then the eigenvalues of $\mu B^{\mathsf T} B$ are real numbers and satisfy $0 < \lambda(\mu B^{\mathsf T} B) < 2$.
Proof — Since $B^{\mathsf T} B$ is symmetric and nonsingular, its eigenvalues are real and satisfy $0 < \lambda(B^{\mathsf T} B) \le \lambda_0$, and hence $\lambda(\mu B^{\mathsf T} B) = \mu\, \lambda(B^{\mathsf T} B) > 0$. Moreover, $\mu\, \lambda(B^{\mathsf T} B) \le \mu \lambda_0 < 2$. In summary, $0 < \lambda(\mu B^{\mathsf T} B) < 2$.
Theorem — If $0 < \mu < 2/\lambda_0$, LSPIA is convergent, and it converges to the least-squares fitting result to the given data points. [10] [19]
Proof — From the matrix form of the iterative format, we obtain
$$\mathbf{P}^{(k+1)} = \left( I - \mu B^{\mathsf T} B \right) \mathbf{P}^{(k)} + \mu B^{\mathsf T} \mathbf{Q}.$$
According to the above lemma, the spectral radius of the matrix $\mu B^{\mathsf T} B$ satisfies $0 < \rho(\mu B^{\mathsf T} B) < 2$, and thus the spectral radius of the iteration matrix satisfies $\rho(I - \mu B^{\mathsf T} B) < 1$. When $k \to \infty$,
$$\mathbf{P}^{(\infty)} = \left( I - \mu B^{\mathsf T} B \right) \mathbf{P}^{(\infty)} + \mu B^{\mathsf T} \mathbf{Q}.$$
As a result, $B^{\mathsf T} B\, \mathbf{P}^{(\infty)} = B^{\mathsf T} \mathbf{Q}$, which is equivalent to the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for the given data points.
Lin et al. showed that LSPIA converges even when the iteration matrix is singular. [18]
Since PIA has a clear geometric meaning, constraints can easily be integrated into the iterations. Currently, PIA is widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis.
For implicit curve and surface reconstruction, PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm. [11]
First, data points are sampled on the original curve. Then, an initial polynomial or rational approximation curve of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method. [33]
Given a triangular mesh model as input, the algorithm first constructs the initial hexahedral mesh and extracts the quadrilateral mesh of its surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm guarantees the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero. [34]
First, the image data are converted into a one-dimensional sequence by a Hilbert scan. Then, these data points are fitted by LSPIA to generate a Hilbert curve. Finally, the Hilbert curve is sampled, and the compressed image can be reconstructed. This method preserves the neighborhood information of pixels well. [35]
Given a data point set, we first define the fairing functional and calculate the fitting difference vector and the fairing vector of each control point; then, the control points are adjusted with fairing weights. Repeating these steps, a fairing curve or surface is generated iteratively. Thanks to the abundant fairing parameters, the method can achieve either global or local fairing. It is also flexible to adjust knot vectors, fairing weights, or the data parameterization after each round of iteration. The traditional energy-minimization method is a special case of this method, namely when the smoothing weights are all equal. [13]
The discretized load values are regarded as the set of data points, and the combination of the basis functions and their derivative functions is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to the fitting result of the blending function to the load values. In addition, the average iteration time per step is only related to the number of data points (i.e., collocation points) and unrelated to the number of control coefficients. [14]