YaDICs

Original author(s): Coudert Sébastien, Seghir Rian, Witz Jean-François
Initial release: January 2012
Stable release: v04.14a / May 27, 2015
Repository: none
Written in: C++
Operating system: Linux
Size: 18.4 MB
Type: Image processing
License: GPLv2 or later
Website: yadics.univ-lille1.fr/wordpress/

YaDICs is a program written to perform digital image correlation on 2D and 3D tomographic images. The program was designed to be both modular, through its plugin strategy, and efficient, through its multithreading strategy. It incorporates different transformations (global, elastic, local), optimization strategies (Gauss-Newton, steepest descent), and global and/or local shape functions (rigid-body motions, homogeneous dilatations, flexural and Brazilian test models), among others.


Theoretical background

Context

In solid mechanics, digital image correlation is a tool that identifies the displacement field registering a reference image (called herein the fixed image) onto images taken during an experiment (the mobile image). For example, one can observe the face of a specimen covered with a painted speckle pattern in order to determine its displacement field during a tensile test. Before the appearance of such methods, researchers usually used strain gauges to measure the mechanical state of the material, but strain gauges only measure the strain at a point and cannot capture heterogeneous material behavior. A full in-plane strain tensor can be obtained by differentiating the displacement field. Many methods are based upon the optical flow.

In fluid mechanics a similar method, called particle image velocimetry (PIV), is used; the algorithms are similar to those of DIC, but since it is impossible to ensure that the optical flow is conserved, the vast majority of the software uses the normalized cross-correlation metric.

In mechanics the displacement or velocity field is the only concern; registering images is just a side effect. There is another process, called image registration, that uses the same algorithms (on monomodal images) but where the goal is to register the images, identifying the displacement field being just a side effect.

YaDICs uses the general principle of image registration with particular attention to the displacement field basis.

Image registration principle

YaDICs can be explained using the classical image registration framework: [1]

Image registration general scheme

The common idea of image registration and digital image correlation is to find the transformation between a fixed image and a moving one for a given metric using an optimization scheme. While there are many methods to achieve such a goal, YaDICs focuses on registering images of the same modality. The idea behind the creation of this software is to be able to process data coming from a μ-tomograph, i.e. data cubes of more than 1000³ voxels. With such sizes it is not possible to use the naive approaches common in a two-dimensional context. In order to get sufficient performance, OpenMP parallelism is used and the data are not globally stored in memory. An extensive description of the different algorithms is given in [1].

Sampling

Contrary to image registration, digital image correlation targets the transformation itself: one wants to extract the most accurate transformation from the two images, not merely match the images. YaDICs uses the whole image as a sampling grid; it is thus a total sampling.

Interpolator

It is possible to choose between bilinear interpolation and bicubic interpolation for the grey-level evaluation at non-integer coordinates. Bicubic interpolation is the recommended one.
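
As an illustration, here is a minimal sketch of bilinear grey-level interpolation at non-integer coordinates; the `Image` structure and the `bilinear` function are hypothetical helpers for this sketch, not the actual YaDICs API.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical grey-level image: row-major pixel storage (width, height >= 2).
struct Image {
    int width, height;
    std::vector<float> data;                       // size = width * height
    float at(int x, int y) const { return data[y * width + x]; }
};

// Bilinear interpolation of the grey level at non-integer coordinates (x, y):
// the four neighbouring pixels are blended with weights given by the
// fractional parts of the coordinates.
float bilinear(const Image& img, float x, float y) {
    // Clamp the sampling point so the 2x2 neighbourhood stays in bounds.
    x = std::clamp(x, 0.0f, static_cast<float>(img.width  - 1));
    y = std::clamp(y, 0.0f, static_cast<float>(img.height - 1));
    int x0 = std::min(static_cast<int>(x), img.width  - 2);
    int y0 = std::min(static_cast<int>(y), img.height - 2);
    float fx = x - x0, fy = y - y0;                // blending weights in [0, 1]
    float top    = (1 - fx) * img.at(x0, y0)     + fx * img.at(x0 + 1, y0);
    float bottom = (1 - fx) * img.at(x0, y0 + 1) + fx * img.at(x0 + 1, y0 + 1);
    return (1 - fy) * top + fy * bottom;
}
```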

Metrics

Sum of squared differences (SSD)

The SSD is also known as mean squared error. The equation below defines the SSD metric:

$$ SSD(\mu) = \frac{1}{\mathrm{card}(\Omega)} \sum_{x \in \Omega} \left[ F(x) - G(T_\mu(x)) \right]^2 $$

where $F$ is the fixed image, $G$ the moving one, $\Omega$ the integration area, $\mathrm{card}(\Omega)$ the number of pi(vo)xels, and $T_\mu$ the transformation parametrized by $\mu$.

The transformation can be written as:

$$ T_\mu(x) = x + u_\mu(x) $$

where $u_\mu$ is the displacement field associated with the parameters $\mu$.

This metric is the main one used in YaDICs, as it works well with same-modality images. One has to find the minimum of this metric.
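
To make the metric concrete, here is a minimal sketch of a total-sampling SSD evaluation; the callables `fixedImage`, `movingInterp` and `transform` are hypothetical stand-ins for the image accessors and the parametrized transformation.

```cpp
#include <functional>
#include <utility>

// Total-sampling SSD: every pixel x of the fixed image F is compared with
// the moving image G evaluated at T_mu(x), and the sum is normalised by
// card(Omega), the pixel count.
double ssd(int width, int height,
           const std::function<double(int, int)>& fixedImage,          // F(x)
           const std::function<double(double, double)>& movingInterp,  // G at real coords
           const std::function<std::pair<double, double>(int, int)>& transform) {
    double sum = 0.0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            auto [tx, ty] = transform(x, y);                  // T_mu(x)
            const double diff = fixedImage(x, y) - movingInterp(tx, ty);
            sum += diff * diff;
        }
    }
    return sum / (static_cast<double>(width) * height);       // 1 / card(Omega)
}
```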

Normalized cross-correlation

The normalized cross-correlation (NCC) is used when the conservation of the optical flow cannot be assured; this happens in case of lighting changes or when particles disappear from the scene, as can occur in particle image velocimetry (PIV).

The NCC is defined by:

$$ NCC(\mu) = \frac{\sum_{x \in \Omega} \left( F(x) - \bar{F} \right) \left( G(T_\mu(x)) - \bar{G} \right)}{\sqrt{\sum_{x \in \Omega} \left( F(x) - \bar{F} \right)^2 \; \sum_{x \in \Omega} \left( G(T_\mu(x)) - \bar{G} \right)^2}} $$

where $\bar{F}$ and $\bar{G}$ are the mean values of the fixed and mobile images.

This metric is only used to find local translations in YaDICs. With a translation transform, this metric can be optimized using cross-correlation methods, which are non-iterative and can be accelerated using the Fast Fourier Transform.
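
A minimal sketch of the zero-normalized cross-correlation of two equally sized patches is given below; the function name and the flat-vector patch layout are assumptions for this sketch, and the FFT-accelerated variant is not shown.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Zero-normalised cross-correlation of two equally sized patches stored as
// flat vectors, e.g. a ZOI of the fixed image and a candidate window of the
// moving image at a trial integer translation. Result lies in [-1, 1].
double ncc(const std::vector<double>& f, const std::vector<double>& g) {
    const double n = static_cast<double>(f.size());
    double meanF = 0.0, meanG = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) { meanF += f[i]; meanG += g[i]; }
    meanF /= n;
    meanG /= n;
    double num = 0.0, varF = 0.0, varG = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        const double df = f[i] - meanF, dg = g[i] - meanG;
        num  += df * dg;                       // centred correlation
        varF += df * df;
        varG += dg * dg;
    }
    return num / std::sqrt(varF * varG);       // 1 means a perfect match
}
```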

Classification of transformations

There are three categories of parametrization: elastic, global and local transformations. The elastic transformations respect the partition of unity: no holes are created and no surface is counted several times. This is commonly used in image registration through B-spline functions [1] [2] and in solid mechanics through finite element bases. [3] [4] The global transformations are defined on the whole picture using rigid-body or affine transformations (the latter being equivalent to homogeneous strain). More complex, mechanically based transformations can also be defined; such transformations have been used for stress intensity factor identification [5] [6] and for rod strain. [7] The local transformations can be considered as the same global transformations defined on several zones of interest (ZOI) of the fixed image.

Global

Several global transforms have been implemented, as listed below; a sketch of the 2D rigid-and-homogeneous shape function follows the list:

  • Rigid and homogeneous (Tx, Ty, Rz in 2D; Tx, Ty, Tz, Rx, Ry, Rz, Exx, Eyy, Ezz, Eyz, Exz, Exy in 3D),
  • Brazilian [8] (only in 2D),
  • Dynamic flexion.
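
Below is a minimal sketch of what the 2D rigid-and-homogeneous shape function can look like under a small-rotation (linearised) assumption; the structure and function names are hypothetical, not the YaDICs implementation.

```cpp
#include <array>

// Parameters of a hypothetical 2D rigid-body + homogeneous-strain shape
// function, linearised in the rotation: mu = (Tx, Ty, Rz, Exx, Eyy, Exy).
struct GlobalParams2D {
    double Tx, Ty;           // rigid translations
    double Rz;               // rigid rotation (radians, small-angle)
    double Exx, Eyy, Exy;    // homogeneous strain components
};

// Displacement u(x) such that T_mu(x) = x + u(x): the linearised rotation
// contributes (-Rz*y, +Rz*x) and the symmetric strain tensor contributes
// (Exx*x + Exy*y, Exy*x + Eyy*y).
std::array<double, 2> displacement(const GlobalParams2D& p, double x, double y) {
    return { p.Tx - p.Rz * y + p.Exx * x + p.Exy * y,
             p.Ty + p.Rz * x + p.Exy * x + p.Eyy * y };
}
```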

Elastic

First-order quadrangular finite elements (Q4P1) are used in YaDICs.
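
For illustration, a minimal sketch of the standard Q4 bilinear shape functions on the reference square follows; it shows the basis only, not YaDICs' mesh handling.

```cpp
#include <array>

// Standard bilinear Q4 shape functions on the reference square
// [-1, 1] x [-1, 1]; each function equals 1 at its own node and 0 at the
// other three.
std::array<double, 4> q4ShapeFunctions(double xi, double eta) {
    return { 0.25 * (1 - xi) * (1 - eta),     // node (-1, -1)
             0.25 * (1 + xi) * (1 - eta),     // node (+1, -1)
             0.25 * (1 + xi) * (1 + eta),     // node (+1, +1)
             0.25 * (1 - xi) * (1 + eta) };   // node (-1, +1)
}

// One displacement component inside the element: shape-function-weighted
// sum of the four nodal values.
double q4Interpolate(const std::array<double, 4>& nodal, double xi, double eta) {
    const std::array<double, 4> N = q4ShapeFunctions(xi, eta);
    return N[0] * nodal[0] + N[1] * nodal[1] + N[2] * nodal[2] + N[3] * nodal[3];
}
```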

Local

Every global transform can be used on a local mesh.

Optimization

The YaDICs optimization process follows a gradient descent scheme.

The first step is to compute the gradient of the metric with respect to the transformation parameters. For the SSD metric defined above, the chain rule gives:

$$ \frac{\partial\, SSD}{\partial \mu} = -\frac{2}{\mathrm{card}(\Omega)} \sum_{x \in \Omega} \left[ F(x) - G(T_\mu(x)) \right] \, \nabla G(T_\mu(x)) \cdot \frac{\partial T_\mu}{\partial \mu}(x) $$

Gradient method

Once the metric gradient has been computed, one has to choose an optimization strategy.

The gradient method principle is explained below:

$$ \mu_{k+1} = \mu_k - \alpha_k \, \nabla_{\mu} C(\mu_k) $$

where $C$ is the metric being minimized. The gradient step $\alpha_k$ can be constant or updated at every iteration; its definition allows one to choose between the following methods:

  • steepest descent,
  • Gauss-Newton.

Many other methods exist (e.g. BFGS, conjugate gradient, stochastic gradient), but as steepest descent and Gauss-Newton are the only ones implemented in YaDICs, those methods are not discussed here.

The Gauss-Newton method is a very efficient method that requires solving a linear system [M]{U} = {F}. On a 1000³-voxel μ-tomographic image the number of degrees of freedom can reach 1e6 (i.e. on a 12×12×12 mesh); dealing with such a problem is more a matter for numerical scientists and requires specific developments (using libraries like PETSc or MUMPS), so Gauss-Newton methods are not used to solve such problems. A dedicated steepest gradient algorithm was developed instead, with a specific tuning of the αk scalar parameter at each iteration. The Gauss-Newton method can be used on small 2D problems.
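
A minimal skeleton of such a steepest-descent loop is sketched below; `computeGradient` and the per-iteration step rule `alpha` are hypothetical stand-ins for the metric-gradient evaluation and the αk tuning.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Steepest-descent skeleton: mu_{k+1} = mu_k - alpha_k * grad C(mu_k).
// 'computeGradient' evaluates the metric gradient and 'alpha' returns the
// step alpha_k tuned at each iteration.
std::vector<double> steepestDescent(
    std::vector<double> mu,
    const std::function<std::vector<double>(const std::vector<double>&)>& computeGradient,
    const std::function<double(const std::vector<double>&, int)>& alpha,
    int maxIter) {
    for (int k = 0; k < maxIter; ++k) {
        const std::vector<double> grad = computeGradient(mu);
        const double step = alpha(grad, k);              // alpha_k
        for (std::size_t i = 0; i < mu.size(); ++i)
            mu[i] -= step * grad[i];                     // descend along -grad
    }
    return mu;
}
```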

Pyramidal filter

None of these optimization methods can succeed if applied directly at the finest scale, as gradient methods are sensitive to the initial guess. In order to find a global optimum, one has to first evaluate the transformation on filtered images. The figure below illustrates how the pyramidal filter is used to find the transformation. [9]

Pyramidal process used in YaDICs (and ITK).
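
A minimal sketch of this coarse-to-fine loop is given below; `downsample` and `registerPair` are assumed helpers, and any rescaling of displacement parameters between scales is folded into `registerPair` for brevity.

```cpp
#include <functional>

// Coarse-to-fine pyramidal registration: start from heavily downsampled
// images, then reuse each scale's result as the initial guess at the next
// finer scale. 'downsample(img, s)' returns the image at scale s (factor
// 2^s) and 'registerPair' runs one registration pass at that scale.
template <typename Image, typename Params>
Params pyramidalRegister(
    const Image& fixed, const Image& moving, Params initialGuess, int nScales,
    const std::function<Image(const Image&, int)>& downsample,
    const std::function<Params(const Image&, const Image&, Params)>& registerPair) {
    Params p = initialGuess;
    for (int s = nScales - 1; s >= 0; --s) {   // coarsest first; s = 0 is full resolution
        const Image f = downsample(fixed, s);
        const Image m = downsample(moving, s);
        p = registerPair(f, m, p);             // refine the previous estimate
    }
    return p;
}
```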

Regularization

The metric is often called the image energy; one usually adds an energy term coming from mechanical assumptions, such as the Laplacian of the displacement (a special case of Tikhonov regularization [10]) or even a finite element problem. As it was decided not to solve the Gauss-Newton problem in most cases, this solution is far from CPU efficient. Cachier et al. [11] demonstrated that the problem of minimizing the image and mechanical energies together can be reformulated as minimizing the image energy and then applying a Gaussian filter to the displacement field at each iteration. YaDICs uses this strategy and adds a median filter, as it is massively used in PIV; the median filter avoids local minima while preserving discontinuities.
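
As an illustration of this filtering strategy, here is a minimal sketch of a 3×3 median filter applied to one scalar component of the displacement field; it is a generic filter sketch, not the YaDICs implementation.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// 3x3 median filter on one scalar component of a displacement field stored
// row-major (width w, height h); borders are left untouched. The median
// rejects outlier vectors while preserving genuine discontinuities.
std::vector<double> median3x3(const std::vector<double>& u, int w, int h) {
    std::vector<double> out(u);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            std::array<double, 9> win;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = u[(y + dy) * w + (x + dx)];
            // Partially sort just enough to place the median at index 4.
            std::nth_element(win.begin(), win.begin() + 4, win.end());
            out[y * w + x] = win[4];
        }
    }
    return out;
}
```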

See also

  • Digital image correlation and tracking

References

  1. S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. P. W. Pluim, "elastix: a toolbox for intensity-based medical image registration," IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 196–205, 2010.
  2. J. Réthoré, T. Elguedj, P. Simon, and M. Coret, "On the use of NURBS functions for displacement derivatives measurement by digital image correlation," Experimental Mechanics, vol. 50, no. 7, pp. 1099–1116, 2010.
  3. G. Besnard, F. Hild, and S. Roux, "'Finite-element' displacement fields analysis from digital images: application to Portevin–Le Châtelier bands," Experimental Mechanics, vol. 46, no. 6, pp. 789–803, 2006.
  4. J. Réthoré, S. Roux, and F. Hild, "From pictures to extended finite elements: extended digital image correlation (X-DIC)," Comptes Rendus Mécanique, vol. 335, no. 3, pp. 131–137, 2007.
  5. R. Hamam, F. Hild, and S. Roux, "Stress intensity factor gauging by digital image correlation: application in cyclic fatigue," Strain, vol. 43, no. 3, pp. 181–192, 2007.
  6. F. Hild and S. Roux, "Measuring stress intensity factors with a camera: integrated digital image correlation (I-DIC)," Comptes Rendus Mécanique, vol. 334, no. 1, pp. 8–12, 2006.
  7. F. Hild, S. Roux, N. Guerrero, M. Marante, and J. Flórez-Llópez, "Calibration of constitutive models of steel beams subject to local buckling by using digital image correlation," European Journal of Mechanics – A/Solids, vol. 30, no. 1, pp. 1–10, 2011.
  8. F. Hild and S. Roux, "Digital image correlation: from displacement measurement to identification of elastic properties – a review," Strain, vol. 42, no. 2, pp. 69–80, 2006.
  9. T. S. Yoo, M. J. Ackerman, W. E. Lorensen, W. Schroeder, V. Chalana, S. Aylward, D. Metaxas, and R. Whitaker, "Engineering and algorithm design for an image processing API: a technical report on ITK – the Insight Toolkit," pp. 586–592, 2002.
  10. A. N. Tikhonov and V. B. Glasko, "Use of the regularization method in non-linear problems," USSR Computational Mathematics and Mathematical Physics, vol. 5, no. 3, pp. 93–107, 1965.
  11. P. Cachier, E. Bardinet, D. Dormont, X. Pennec, and N. Ayache, "Iconic feature based nonrigid registration: the PASHA algorithm," Computer Vision and Image Understanding, vol. 89, no. 2–3, pp. 272–298, 2003.