Census transform

[Figure: Example of census transform. A synthetic scene (Glasses 800 edit.png) and the result of grayscale conversion followed by the census transform (Census transform example.png).]

The census transform (CT) is an image operator that associates to each pixel of a grayscale image a binary string, with one bit per neighbour encoding whether the pixel has smaller intensity than that neighbour. It is a non-parametric transform that depends only on the relative ordering of intensities, not on their actual values, which makes it invariant to monotonic changes of illumination, and it behaves well in the presence of multimodal intensity distributions, e.g. along object boundaries.[1] It has applications in computer vision, and it is commonly used in visual correspondence problems such as optical flow calculation and disparity estimation.[2]


The census transform is related to the rank transform, which associates to each pixel the number of neighbouring pixels with higher intensity than the pixel itself; both transforms were introduced in the same paper.[3]
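As an illustration, the rank transform described above can be sketched in Python with NumPy; the 3×3 window and the zero-filled border are illustrative choices, not part of the original definition.

```python
import numpy as np

def rank_transform(img):
    """Rank transform (a sketch): each interior pixel is replaced by
    the count of its 8-connected neighbours with strictly higher
    intensity. Border pixels are left at 0 for simplicity."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            # The centre compares False against itself, so only the
            # 8 neighbours can contribute to the count.
            out[y, x] = np.sum(patch > img[y, x])
    return out
```

Like the census transform, the result depends only on intensity ordering, so any monotonically increasing remapping of the image leaves it unchanged.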

Algorithm

The most common version of the census transform uses a 3×3 window, comparing each pixel with each of its 8-connected neighbours using a function defined as

    ξ(p, p′) = 1 if I(p) < I(p′), and 0 otherwise,

where I(p) is the intensity of pixel p and p′ ranges over the neighbours of p.

The results of these comparisons are concatenated, so the value of the transform is an 8-bit string that can be encoded in a single byte.

Similarity between images is determined by comparing the values of the census transform for corresponding pixels using the Hamming distance.[3] Several variations of the algorithm exist, differing in the window size, the order of the neighbours in the pattern (row-wise, clockwise, counterclockwise), and the comparison operator (greater, greater or equal, less, less or equal).[4]
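The 3×3 transform and the Hamming-distance comparison above can be sketched in Python with NumPy. The row-wise bit order, the zero-filled border, and the function names are illustrative choices, not part of a canonical specification.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform (a sketch).

    Each interior pixel gets an 8-bit code: bit k is 1 when the
    centre pixel is darker than its k-th neighbour, scanned
    row-wise. Border pixels are left at 0 for simplicity.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8-connected neighbours, row-wise order.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | int(img[y, x] < img[y + dy, x + dx])
            out[y, x] = code
    return out

def hamming_distance(a, b):
    """Number of differing bits between two census codes."""
    return bin(int(a) ^ int(b)).count("1")
```

Because the codes depend only on intensity ordering, census-transforming an image after a monotonic illumination change (e.g. multiplying all intensities by a constant) yields exactly the same codes, which is the invariance property described above.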

An extension of the algorithm uses a three-way comparison that can also represent similar pixels, i.e. those whose intensity difference from the centre pixel is at most a tolerance parameter ε, defined as [5]

    ξ(p, p′) = 0 if |I(p) − I(p′)| ≤ ε; 1 if I(p′) > I(p) + ε; 2 if I(p′) < I(p) − ε,

whose result can be encoded with two bits for each neighbour, thus doubling the size of the pattern for each pixel.
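A sketch of this three-way (ternary) variant in Python follows. The particular 2-bit patterns assigned to the three outcomes, and the default value of ε, are illustrative assumptions, not prescribed by the original formulation.

```python
import numpy as np

def ternary_census(img, eps=2):
    """Three-way census transform (a sketch).

    Each of the 8 neighbours contributes two bits:
      00 -> similar (absolute difference <= eps)
      01 -> neighbour brighter than centre by more than eps
      10 -> neighbour darker than centre by more than eps
    giving a 16-bit code per interior pixel.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint16)
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                d = int(img[y + dy, x + dx]) - int(img[y, x])
                if d > eps:
                    bits = 0b01    # neighbour brighter
                elif d < -eps:
                    bits = 0b10    # neighbour darker
                else:
                    bits = 0b00    # similar within tolerance
                code = (code << 2) | bits
            out[y, x] = code
    return out
```

The tolerance makes the transform robust to small intensity noise in nearly uniform regions, at the cost of twice as many bits per pixel.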


References

  1. Zabih and Woodfill (1994), p. 152.
  2. Hafner et al. (2013).
  3. Zabih and Woodfill (1994), p. 153.
  4. "Census Transform Algorithm Overview". Intel. Retrieved 2019-06-05.
  5. Stein (2004).