Lacunarity

Figure 1. Basic fractal patterns increasing in lacunarity from left to right.

Figure 1 (rotated). The same images as above, rotated 90°. Whereas the first two images appear essentially the same as they do above, the third looks different from its unrotated original. This feature is captured in the measures of lacunarity listed across the top of the figures, as calculated using FracLac, a standard box counting program for biological imaging that runs in ImageJ.

Lacunarity, from the Latin lacuna, meaning "gap" or "lake", is a specialized term in geometry referring to a measure of how patterns, especially fractals, fill space, where patterns having more or larger gaps generally have higher lacunarity. Beyond being an intuitive measure of gappiness, lacunarity can quantify additional features of patterns such as "rotational invariance" and more generally, heterogeneity. [1] [2] [3] This is illustrated in Figure 1 showing three fractal patterns. When rotated 90°, the first two fairly homogeneous patterns do not appear to change, but the third more heterogeneous figure does change and has correspondingly higher lacunarity. The earliest reference to the term in geometry is usually attributed to Benoit Mandelbrot, who, in 1983 or perhaps as early as 1977, introduced it as, in essence, an adjunct to fractal analysis. [4] Lacunarity analysis is now used to characterize patterns in a wide variety of fields and has application in multifractal analysis [5] [6] in particular (see Applications).

Measuring lacunarity

In many patterns or data sets, lacunarity is not readily perceivable or quantifiable, so computer-aided methods have been developed to calculate it. As a measurable quantity, lacunarity is often denoted in the scientific literature by the Greek letters Λ or λ, but there is no single standard, and several different methods exist to assess and interpret lacunarity.

Box counting lacunarity

Figure 2a. Boxes laid over an image as a fixed grid.

Figure 2b. Boxes slid over an image in an overlapping pattern.

One well-known method of determining lacunarity for patterns extracted from digital images uses box counting, the same essential algorithm typically used for some types of fractal analysis. [1] [4] Similar to looking at a slide through a microscope with changing levels of magnification, box counting algorithms look at a digital image from many levels of resolution to examine how certain features change with the size of the element used to inspect the image. Basically, the arrangement of pixels is measured using traditionally square (i.e., box-shaped) elements from an arbitrary set of sizes, conventionally denoted ε. For each ε, a box of size ε is placed successively on the image, in the end covering it completely, and each time it is laid down, the number of pixels that fall within the box is recorded. [note 1] In standard box counting, the box for each ε is placed as though it were part of a grid overlaid on the image, so that the box does not overlap itself, but in sliding box algorithms the box is slid over the image so that it overlaps itself, and the "sliding box lacunarity" (SLac) is calculated. [3] [7] Figure 2 illustrates both types of box counting.
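The two placement schemes can be sketched in a few lines of Python. This is a minimal illustration, not the algorithm of any particular package; the function name and the edge handling (partial boxes at the image border are skipped) are assumptions:

```python
import numpy as np

def box_counts(image, size, sliding=False):
    """Count foreground pixels in each box of side `size` laid over a
    2-D binary image. With sliding=False the boxes tile the image as a
    fixed, non-overlapping grid; with sliding=True the box is slid one
    pixel at a time so that successive placements overlap."""
    step = 1 if sliding else size
    counts = []
    for i in range(0, image.shape[0] - size + 1, step):
        for j in range(0, image.shape[1] - size + 1, step):
            counts.append(int(image[i:i + size, j:j + size].sum()))
    return counts

# A 4x4 test pattern with one filled 2x2 corner:
img = np.zeros((4, 4), dtype=int)
img[:2, :2] = 1
print(box_counts(img, 2))                # fixed grid: [4, 0, 0, 0]
print(box_counts(img, 2, sliding=True))  # 9 overlapping placements
```

Note how the same image yields four counts under the fixed grid but nine under the sliding box, since the sliding box visits every possible position.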

Calculations from box counting

The data gathered for each ε are manipulated to calculate lacunarity. One measure, denoted here as λ, is found from the coefficient of variation (CV), calculated as the standard deviation (σ) divided by the mean (μ) of the number of pixels per box. [1] [3] [6] Because the way an image is sampled will depend on the arbitrary starting location, for any image sampled at any ε there will be some number (G) of possible grid orientations, each denoted here by g, over which the data can be gathered, and these can have varying effects on the measured distribution of pixels. [5] [note 2] Equation 1 shows the basic method of calculating λ for each ε and g:

$$\lambda_{\varepsilon,g} = \left( \frac{\sigma_{\varepsilon,g}}{\mu_{\varepsilon,g}} \right)^{2} \qquad (1)$$
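Equation 1 is the squared coefficient of variation of the per-box pixel counts, so it can be computed directly from the output of a box counting pass. A sketch (the function name is illustrative):

```python
import numpy as np

def lacunarity_cv(counts):
    """Equation 1: lambda = (sigma / mu)^2 for the pixel counts per box
    at one box size and grid orientation, computed as the variance
    divided by the squared mean."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() ** 2

# Heterogeneous counts give high lacunarity; uniform counts give 0:
print(lacunarity_cv([4, 0, 0, 0]))  # -> 3.0
print(lacunarity_cv([1, 1, 1, 1]))  # -> 0.0
```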

Probability distributions

Alternatively, some methods sort the numbers of pixels counted into a probability distribution having B bins, and use the bin masses (m) and their corresponding probabilities (p) to calculate λ according to Equations 2 through 5:

$$\mu_{\varepsilon,g} = \sum_{b=1}^{B} m_{b}\, p_{m_{b}} \qquad (2)$$

$$\mathrm{E}[m^{2}]_{\varepsilon,g} = \sum_{b=1}^{B} m_{b}^{2}\, p_{m_{b}} \qquad (3)$$

$$\sigma_{\varepsilon,g}^{2} = \mathrm{E}[m^{2}]_{\varepsilon,g} - \mu_{\varepsilon,g}^{2} \qquad (4)$$

$$\lambda_{\varepsilon,g} = \frac{\sigma_{\varepsilon,g}^{2}}{\mu_{\varepsilon,g}^{2}} \qquad (5)$$

Interpreting λ

Lacunarity based on λ has been assessed in several ways, including by using the variation in, or the average value of, λ for each ε over all grid orientations (see Equation 6) and by using the variation in, or average of, λ over all grids and box sizes (see Equation 7). [1] [5] [7] [8]

$$\Lambda_{\varepsilon} = \frac{1}{G} \sum_{g=1}^{G} \lambda_{\varepsilon,g} \qquad (6)$$

$$\overline{\Lambda} = \frac{1}{|E|} \sum_{\varepsilon \in E} \Lambda_{\varepsilon} \qquad (7)$$

Here G is the number of grid orientations at each box size ε, and E is the set of box sizes used.

Relationship to the fractal dimension

Lacunarity analyses using the types of values discussed above have shown that data sets extracted from dense fractals, from patterns that change little when rotated, or from patterns that are homogeneous, have low lacunarity, but as gappiness, heterogeneity, and rotational variance increase, so generally does lacunarity. In some instances, it has been demonstrated that fractal dimensions and values of lacunarity were correlated, [1] but more recent research has shown that this relationship does not hold for all types of patterns and measures of lacunarity. [5] Indeed, as Mandelbrot originally proposed, lacunarity has been shown to be useful in discerning amongst patterns (e.g., fractals, textures, etc.) that share or have similar fractal dimensions, in a variety of scientific fields including neuroscience. [8]

Graphical lacunarity

Other methods of assessing lacunarity from box counting data use the relationship between values of lacunarity (e.g., λ) and ε in different ways from the ones noted above. One such method looks at the plot of ln λ versus ln ε. According to this method, the curve itself can be analyzed visually, or the slope can be calculated from the ln λ versus ln ε regression line. [3] [7] Because they tend to behave in certain ways for respectively mono-, multi-, and non-fractal patterns, ln λ versus ln ε lacunarity plots have been used to supplement other methods of classifying such patterns. [5] [8]

To make the plots for this type of analysis, the data from box counting first have to be transformed as in Equation 9 :

$$\lambda'_{\varepsilon,g} = \lambda_{\varepsilon,g} + 1 \qquad (9)$$

This transformation avoids undefined values, which is important because homogeneous images will have λ equal to 0 at some ε, so that the slope of the ln λ versus ln ε regression line would be impossible to find. With λ', homogeneous images have a slope of 0, corresponding intuitively to the idea of no rotational or translational variance and no gaps. [9]
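The transformed slope can be obtained with an ordinary least-squares fit. A sketch, assuming λ has already been computed at each box size (the offset of Equation 9 appears inside the logarithm):

```python
import numpy as np

def lacunarity_slope(sizes, lambdas):
    """Slope of ln(lambda + 1) versus ln(box size).

    Adding 1 before taking the log keeps homogeneous images
    (lambda = 0 at every size) well defined and gives them a
    regression slope of 0."""
    x = np.log(np.asarray(sizes, dtype=float))
    y = np.log(np.asarray(lambdas, dtype=float) + 1.0)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# A homogeneous image has lambda = 0 at every box size -> slope of 0.
print(lacunarity_slope([2, 4, 8], [0.0, 0.0, 0.0]))
```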

One box counting technique using a "gliding" box calculates lacunarity according to:

$$\Lambda(r) = \frac{\displaystyle\sum_{n} n^{2}\, Q(n, r)}{\left[ \displaystyle\sum_{n} n\, Q(n, r) \right]^{2}} \qquad (10)$$

Here, n is the number of filled data points in the box, and Q(n, r) is the normalized frequency distribution of n for each box size r.
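Equation 10 can be read off directly: because Q(n, r) is normalized over box positions, the two moments reduce to simple means of the gliding-box counts. A sketch (the function name is illustrative):

```python
import numpy as np

def gliding_box_lacunarity(image, r):
    """Equation 10: Lambda(r) = sum(n^2 Q) / [sum(n Q)]^2, where n is the
    filled count in each overlapping (gliding) box of side r and Q(n, r)
    is the normalized frequency distribution of n over box positions."""
    counts = []
    for i in range(image.shape[0] - r + 1):
        for j in range(image.shape[1] - r + 1):
            counts.append(image[i:i + r, j:j + r].sum())
    n = np.array(counts, dtype=float)
    # With Q normalized over positions, the moments reduce to means:
    return (n ** 2).mean() / n.mean() ** 2

img = np.zeros((4, 4), dtype=int)
img[:2, :2] = 1
print(gliding_box_lacunarity(img, 2))  # -> 25/9, about 2.778
```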

Prefactor lacunarity

Another proposed way of assessing lacunarity using box counting, the Prefactor method, is based on the value obtained from box counting for the fractal dimension (D_B). This statistic uses the variable A from the scaling rule N = Aε^(−D_B), where A is calculated from the y-intercept (y) of the ln–ln regression line for ε and the count (N) of boxes that had any pixels at all in them. A is particularly affected by image size and the way data are gathered, especially by the lower limit of the box sizes used. The final measure is calculated as shown in Equations 11 through 13: [1] [4]

$$A_{g} = e^{y_{g}} \qquad (11)$$

$$\bar{A} = \frac{1}{G} \sum_{g=1}^{G} A_{g} \qquad (12)$$

$$P\Lambda = \frac{1}{G} \sum_{g=1}^{G} \left( 1 - \frac{A_{g}}{\bar{A}} \right)^{2} \qquad (13)$$

Here y_g is the y-intercept of the ln–ln regression line for grid orientation g, so that A_g is the prefactor recovered for that orientation, and the final measure reflects the variation of A_g around its mean.
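As a sketch of the first step only: the prefactor A and dimension D_B can be recovered from the slope and intercept of the ln–ln regression. The function name and the synthetic data, which follow the scaling rule exactly, are invented for illustration:

```python
import numpy as np

def fit_scaling_rule(sizes, counts):
    """Fit N = A * eps^(-D_B) by linear regression of ln N on ln eps:
    the slope gives -D_B and the y-intercept gives ln A."""
    x = np.log(np.asarray(sizes, dtype=float))
    y = np.log(np.asarray(counts, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return -slope, np.exp(intercept)  # (D_B, A)

# Synthetic counts obeying N = 100 * eps^(-1.5) exactly:
sizes = np.array([1.0, 2.0, 4.0, 8.0])
counts = 100.0 * sizes ** -1.5
D_B, A = fit_scaling_rule(sizes, counts)
print(round(D_B, 6), round(A, 4))  # -> 1.5 100.0
```

In practice one such A would be obtained per grid orientation and the spread of those values fed into the prefactor measure.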

Applications

Lacunarity analysis plays an important role in a wide variety of fields. Practical uses illustrated in the research cited below include optimal-cover analysis of ramified data sets, [10] lithic use-wear analysis in archaeology, [11] TEM image analysis of organosilica materials, [12] texture classification of high-resolution urban imagery, [13] ground-vehicle image analysis, [14] structural geology, [15] dental radiography, [16] and food surface characterization. [17]

Notes

  1. This contrasts with box counting fractal analysis where the total number of boxes that contained any pixels is counted to determine a fractal dimension.
  2. See FracLac, Box Counting, for an explanation of methods to address variation with grid location.


References

  1. Smith, T. G.; Lange, G. D.; Marks, W. B. (1996). "Fractal methods and results in cellular morphology — dimensions, lacunarity and multifractals". Journal of Neuroscience Methods. 69 (2): 123–136. doi:10.1016/S0165-0270(96)00080-5. PMID 8946315.
  2. Plotnick, R. E.; Gardner, R. H.; Hargrove, W. W.; Prestegaard, K.; Perlmutter, M. (1996). "Lacunarity analysis: A general technique for the analysis of spatial patterns". Physical Review E. 53 (5): 5461–8. doi:10.1103/physreve.53.5461. PMID 9964879.
  3. Plotnick, R. E.; Gardner, R. H.; O'Neill, R. V. (1993). "Lacunarity indices as measures of landscape texture". Landscape Ecology. 8 (3): 201. doi:10.1007/BF00125351.
  4. Mandelbrot, Benoit (1983). The Fractal Geometry of Nature. ISBN 978-0-7167-1186-5.
  5. Karperien (2004). "Chapter 8: Multifractality and Lacunarity". Defining Microglial Morphology: Form, Function, and Fractal Dimension. Charles Sturt University.
  6. Al-Kadi, O. S.; Watson, D. (2008). "Texture Analysis of Aggressive and non-Aggressive Lung Tumor CE CT Images". IEEE Transactions on Biomedical Engineering. 55 (7): 1822–30. doi:10.1109/TBME.2008.919735. PMID 18595800.
  7. McIntyre, N. E.; Wiens, J. A. (2000). "A novel use of the lacunarity index to discern landscape function". Landscape Ecology. 15 (4): 313. doi:10.1023/A:1008148514268.
  8. Jelinek, Herbert; Karperien, Audrey; Milosevic, Nebojsa (June 2011). "Lacunarity Analysis and Classification of Microglia in Neuroscience". 8th European Conference on Mathematical and Theoretical Biology, Kraków.
  9. Karperien (2002). "Interpreting Lacunarity". FracLac.
  10. Tolle, C. (2003). "Lacunarity definition for ramified data sets based on optimal cover". Physica D: Nonlinear Phenomena. 179 (3–4): 129–201. doi:10.1016/S0167-2789(03)00029-0.
  11. Stevens, N. E.; Harro, D. R.; Hicklin, A. (2010). "Practical quantitative lithic use-wear analysis using multiple classifiers". Journal of Archaeological Science. 37 (10): 2671. doi:10.1016/j.jas.2010.06.004.
  12. Rievra-Virtudazo, R. V.; Tapia, A. K. G.; Valenzuela, J. F. B.; Cruz, L. D.; Mendoza, H. D.; Castriciones, E. V. (23 November 2008). "Lacunarity analysis of TEM Images of Heat-Treated Hybrid Organosilica Materials". In Sener, Bilge (ed.). Innovations in Chemical Biology. Springer. pp. 397–404. ISBN 978-1-4020-6955-0.
  13. Filho, M. B.; Sobreira, F. (2008). "Accuracy of Lacunarity Algorithms in Texture Classification of High Spatial Resolution Images from Urban Areas". The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. XXXVII (Part B3b).
  14. Gorsich, D. J.; Tolle, C. R.; Karlsen, R. E.; Gerhart, G. R. (1996). "Wavelet and fractal analysis of ground-vehicle images". In Unser, Michael A.; Aldroubi, Akram; Laine, Andrew F. (eds.). Wavelet Applications in Signal and Image Processing IV. Vol. 2825. pp. 109–119. doi:10.1117/12.255224.
  15. Vannucchi, P.; Leoni, L. (30 October 2007). "Structural characterization of the Costa Rica decollement: Evidence for seismically-induced fluid pulsing". Earth and Planetary Science Letters. 262 (3–4): 413–428. doi:10.1016/j.epsl.2007.07.056.
  16. Yaşar, F.; Akgünlü, F. (2005). "Fractal dimension and lacunarity analysis of dental radiographs". Dentomaxillofacial Radiology. 34 (5): 261–267. doi:10.1259/dmfr/85149245. PMID 16120874.
  17. Valous, N. A.; Sun, D.-W.; Allen, P.; Mendoza, F. (January 2010). "The use of lacunarity for visual texture characterization of pre-sliced cooked pork ham surface intensities". Food Research International. 43 (1): 387–395. doi:10.1016/j.foodres.2009.10.018.