Gradient pattern analysis

Gradient pattern analysis (GPA) [1] is a geometric computing method for characterizing the bilateral symmetry breaking of an ensemble of symmetric vectors regularly distributed on a square lattice. Usually, the lattice of vectors represents the first-order gradient of a scalar field, here an M × M square amplitude matrix. An important property of the gradient representation is the following: a given M × M matrix in which all amplitudes are different results in an M × M gradient lattice containing asymmetric vectors. Since each vector can be characterized by its norm and phase, variations in the amplitudes modify the respective gradient pattern.
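
As a concrete illustration, the following is a minimal sketch of this gradient representation in Python using NumPy; the function name gradient_lattice and the example matrix are illustrative assumptions, not part of the method's published code.

    import numpy as np

    def gradient_lattice(A):
        """Norms and phases of the first-order gradient of an M x M matrix A."""
        gy, gx = np.gradient(A)       # finite differences along rows and columns
        norms = np.hypot(gx, gy)      # modulus |v| of each lattice vector
        phases = np.arctan2(gy, gx)   # orientation of each lattice vector
        return norms, phases

    # A matrix with all-distinct amplitudes yields asymmetric gradient vectors.
    rng = np.random.default_rng(0)
    norms, phases = gradient_lattice(rng.random((8, 8)))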

The original concept of GPA was introduced by Rosa, Sharma and Valdivia in 1999. [2] GPA is typically applied to spatio-temporal pattern analysis in physics and environmental sciences, operating on time series and digital images.

Calculation

By connecting all vectors using a Delaunay triangulation criterion, it is possible to characterize gradient asymmetries by computing the so-called gradient asymmetry coefficient, defined as G_A = (N_C - N_V) / N_V, where N_V is the total number of asymmetric vectors, N_C is the number of Delaunay connections among them, and the property N_C ≥ N_V holds for any gradient square lattice.
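
A rough sketch of this computation, assuming the positions of the N_V asymmetric vectors have already been extracted from the lattice; SciPy's Delaunay triangulation is used, and counting unique triangle edges as the connections N_C is this sketch's reading, not a published implementation.

    import numpy as np
    from scipy.spatial import Delaunay

    def asymmetry_coefficient(positions):
        """G_A = (N_C - N_V) / N_V for an (N_V, 2) array of vector positions."""
        n_v = len(positions)
        if n_v < 3:
            return 0.0                      # too few vectors to triangulate (sketch choice)
        tri = Delaunay(positions)
        edges = set()                       # unique Delaunay connections
        for simplex in tri.simplices:
            for k in range(3):
                a, b = sorted((simplex[k], simplex[(k + 1) % 3]))
                edges.add((a, b))
        n_c = len(edges)
        return (n_c - n_v) / n_v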

As the asymmetry coefficient is very sensitive to small changes in the phase and modulus of each gradient vector, it can distinguish complex variability patterns (bilateral asymmetry) even when they are very similar but differ by a very fine structural detail. Note that, unlike most statistical tools, GPA does not rely on the statistical properties of the data but depends solely on the local symmetry properties of the corresponding gradient pattern.

For a complex extended pattern (a matrix of amplitudes of a spatio-temporal pattern) composed of locally asymmetric fluctuations, G_A is nonzero, defining different classes of irregular fluctuation patterns (1/f noise, chaotic, reactive-diffusive, etc.).
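
To make the sketches above concrete, one simplified selection rule for the asymmetric vectors is shown below; this pair-cancellation criterion is an assumption for illustration, not the published operator in full detail: a vector is treated as symmetric, and discarded, when its mirror site through the lattice centre carries a vector of equal norm and opposite phase.

    import numpy as np

    def asymmetric_positions(norms, phases, tol=1e-6):
        """Keep lattice sites whose vector has no symmetric counterpart."""
        M = norms.shape[0]
        keep = np.ones((M, M), dtype=bool)
        for i in range(M):
            for j in range(M):
                oi, oj = M - 1 - i, M - 1 - j      # reflection through the centre
                same_norm = abs(norms[i, j] - norms[oi, oj]) < tol
                # opposite phases differ by pi in absolute value
                opposite = abs(abs(phases[i, j] - phases[oi, oj]) - np.pi) < tol
                if same_norm and opposite:
                    keep[i, j] = False
        return np.argwhere(keep)

Chaining the sketches, asymmetry_coefficient(asymmetric_positions(*gradient_lattice(A))) is then nonzero for an irregular amplitude matrix A, and decreases as more vectors cancel in symmetric pairs.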

Besides the asymmetry coefficient, other measurements (called gradient moments) can be computed from the gradient lattice. [3] Considering the sets of local norms and phases as discrete compact groups, spatially distributed in a square lattice, the gradient moments have the basic property of being globally invariant (under rotation and modulation).
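
As a small consistency check of the rotation part of this claim, under the assumption that the moments are built from the set of vector norms (a hypothetical check, not code from the references): rotating the amplitude matrix by 90 degrees rotates every gradient vector but leaves the multiset of norms unchanged.

    import numpy as np

    A = np.random.default_rng(2).random((16, 16))
    gy, gx = np.gradient(A)
    norm_set = np.sort(np.hypot(gx, gy), axis=None)

    gy_r, gx_r = np.gradient(np.rot90(A))        # rotate the field by 90 degrees
    norm_set_r = np.sort(np.hypot(gx_r, gy_r), axis=None)

    assert np.allclose(norm_set, norm_set_r)     # norm set is rotation-invariant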

The primary research on gradient lattices, applied to characterize weak wave turbulence from X-ray images of solar active regions, was developed in the Department of Astronomy at the University of Maryland, College Park, USA. A key line of research on GPA algorithms and applications has been developed at the Laboratory for Computing and Applied Mathematics (LAC) of the National Institute for Space Research (INPE) in Brazil.

Relation to other methods

When GPA is combined with wavelet analysis, the method is called gradient spectral analysis (GSA), which is usually applied to the analysis of short time series. [4]
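
A rough sketch of this combination, assuming the PyWavelets package (pywt) for the continuous wavelet transform and reusing the hypothetical helpers sketched above (gradient_lattice, asymmetric_positions, asymmetry_coefficient); the Morlet wavelet, the scale range, and the test signal are all illustrative choices.

    import numpy as np
    import pywt

    t = np.linspace(0.0, 1.0, 128)
    series = (np.sin(2 * np.pi * 8 * t)
              + 0.3 * np.random.default_rng(3).standard_normal(t.size))

    scales = np.arange(1, 129)                    # 128 scales -> square scalogram
    coeffs, _ = pywt.cwt(series, scales, 'morl')  # (scale, time) coefficient matrix

    # Treat the scalogram amplitudes as the amplitude matrix fed to GPA.
    norms, phases = gradient_lattice(np.abs(coeffs))
    g_a = asymmetry_coefficient(asymmetric_positions(norms, phases))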

References

  1. Rosa, R.R.; Pontes, J.; Christov, C.I.; Ramos, F.M.; Rodrigues Neto, C.; Rempel, E.L.; Walgraef, D. Physica A 283, 156 (2000).
  2. Rosa, R.R.; Sharma, A.S.; Valdivia, J.A. Int. J. Mod. Phys. C 10, 147 (1999). doi:10.1142/S0129183199000103.
  3. Rosa, R.R.; Campos, M.R.; Ramos, F.M.; Vijaykumar, N.L.; Fujiwara, S.; Sato, T. Braz. J. Phys. 33, 605 (2003).
  4. Rosa, R.R. et al. Advances in Space Research 42, 844 (2008). doi:10.1016/j.asr.2007.08.015.