Summed-area table

Using a summed-area table (2.) of an order-6 magic square (1.) to sum up a subrectangle of its values; each coloured spot highlights the sum inside the rectangle of that colour.

A summed-area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image. It was introduced to computer graphics in 1984 by Frank Crow for use with mipmaps. In computer vision it was popularized by Lewis [1] and then given the name "integral image" and prominently used within the Viola–Jones object detection framework in 2001. Historically, this principle is very well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (area under the probability distribution) from the respective cumulative distribution functions. [2]


The algorithm

As the name suggests, the value at any point (x, y) in the summed-area table is the sum of all the pixels above and to the left of (x, y), inclusive: [3] [4]

$$I(x,y) = \sum_{\substack{x' \le x \\ y' \le y}} i(x',y')$$

where $i(x,y)$ is the value of the pixel at (x, y).

The summed-area table can be computed efficiently in a single pass over the image, as the value in the summed-area table at (x, y) is just: [5]

$$I(x,y) = i(x,y) + I(x,y-1) + I(x-1,y) - I(x-1,y-1)$$

(Note that the summed-area table is computed starting from the top-left corner, with $I$ taken to be zero outside the image.)
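As a concrete sketch of this single-pass recurrence (illustrative code, not from the cited sources; the function name and the use of NumPy are assumptions), in Python:

```python
import numpy as np

def summed_area_table(image):
    """Build a summed-area table in one pass over a 2-D image.

    table[y, x] holds the sum of image[0:y+1, 0:x+1].
    """
    image = np.asarray(image, dtype=np.int64)  # widen the dtype so sums cannot overflow
    table = np.zeros_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            table[y, x] = (image[y, x]
                           + (table[y - 1, x] if y > 0 else 0)                 # I(x, y-1)
                           + (table[y, x - 1] if x > 0 else 0)                 # I(x-1, y)
                           - (table[y - 1, x - 1] if x > 0 and y > 0 else 0))  # I(x-1, y-1)
    return table
```

The same table can also be obtained with two cumulative sums, e.g. image.cumsum(axis=0).cumsum(axis=1) in NumPy.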

A description of computing a sum in the summed-area table data structure/algorithm.

Once the summed-area table has been computed, evaluating the sum of intensities over any rectangular area requires exactly four array references regardless of the area size. That is, using the notation in the figure at right, with A = (x0, y0), B = (x1, y0), C = (x0, y1) and D = (x1, y1), the sum of i(x,y) over the rectangle spanned by A, B, C, and D is:

$$\sum_{\substack{x_0 < x \le x_1 \\ y_0 < y \le y_1}} i(x,y) = I(D) + I(A) - I(B) - I(C)$$
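A minimal lookup routine following this identity might look as follows (an illustrative helper, assuming the table built in the sketch above and inclusive rectangle coordinates):

```python
def rect_sum(table, x0, y0, x1, y1):
    """Sum of image[y0:y1+1, x0:x1+1] using at most four table lookups.

    This is I(D) + I(A) - I(B) - I(C), with A, B and C taken just outside
    the top and left edges of the rectangle; lookups that would fall
    outside the image contribute zero.
    """
    total = int(table[y1, x1])                  # I(D)
    if y0 > 0:
        total -= int(table[y0 - 1, x1])         # I(B)
    if x0 > 0:
        total -= int(table[y1, x0 - 1])         # I(C)
    if x0 > 0 and y0 > 0:
        total += int(table[y0 - 1, x0 - 1])     # I(A)
    return total
```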

Extensions

This method is naturally extended to continuous domains. [2]
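In the probabilistic reading of reference [2], this is the familiar identity that recovers a rectangle probability from a two-dimensional cumulative distribution function $F$:

$$P(x_0 < X \le x_1,\; y_0 < Y \le y_1) = F(x_1, y_1) - F(x_0, y_1) - F(x_1, y_0) + F(x_0, y_0)$$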

The method can also be extended to high-dimensional images. [6] If the corners of the rectangle are $x^p$ with $p \in \{0,1\}^d$, then the sum of image values contained in the rectangle is computed with the formula

$$\sum_{p \in \{0,1\}^d} (-1)^{d - \|p\|_1} \, I(x^p)$$

where $I(x)$ is the integral image at $x$ and $d$ the image dimension. The notation corresponds in the example to $d = 2$, $x^{(0,0)} = A$, $x^{(1,0)} = B$, $x^{(0,1)} = C$ and $x^{(1,1)} = D$. In neuroimaging, for example, the images have dimension $d = 3$ or $d = 4$, when using voxels or voxels with a time-stamp.
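A sketch of the d-dimensional lookup (illustrative helpers, assuming an integral image built by cumulative sums along every axis and an inclusive box lo..hi):

```python
import itertools
import numpy as np

def integral_image_nd(image):
    """n-dimensional integral image: cumulative sums along every axis."""
    table = np.asarray(image, dtype=np.int64)
    for axis in range(table.ndim):
        table = table.cumsum(axis=axis)
    return table

def box_sum_nd(table, lo, hi):
    """Sum of image values in the axis-aligned box lo..hi (inclusive),
    via the alternating-sign sum over the 2**d box corners."""
    d = table.ndim
    total = 0
    for p in itertools.product((0, 1), repeat=d):
        # Corner index: hi[k] where p[k] == 1, lo[k] - 1 where p[k] == 0.
        idx = tuple(hi[k] if bit else lo[k] - 1 for k, bit in enumerate(p))
        if any(i < 0 for i in idx):
            continue  # a corner outside the image contributes zero
        total += (-1) ** (d - sum(p)) * int(table[idx])
    return total
```

For d = 2 this reduces to the four-corner formula of the previous section.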

This method has been extended to high-order integral images, as in the work of Phan et al., [7] who provided two, three, or four integral images for quickly and efficiently calculating the standard deviation (variance), skewness, and kurtosis of a local block in the image. This is detailed below:

To compute the variance or standard deviation of a block, we need two integral images:

$$I(x,y) = \sum_{\substack{x' \le x \\ y' \le y}} i(x',y'), \qquad I^2(x,y) = \sum_{\substack{x' \le x \\ y' \le y}} i^2(x',y')$$

The variance is given by:

$$\operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2$$

Let $S_1$ and $S_2$ denote the summations of block $B$ of $I$ and $I^2$, respectively. $S_1$ and $S_2$ are computed quickly by integral image. Now, we manipulate the variance equation as:

$$\operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^{n} \left(x_i^2 - 2\mu x_i + \mu^2\right) = \frac{1}{n}\left(S_2 - 2\mu S_1 + n\mu^2\right) = \frac{1}{n}\left(S_2 - \frac{S_1^2}{n}\right)$$

where $\mu = S_1 / n$ and $n$ is the number of pixels in block $B$.
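A brief sketch of this computation, reusing the rect_sum helper above (again an illustrative assumption rather than the authors' code):

```python
import numpy as np

def block_variance(image, x0, y0, x1, y1):
    """Variance of image[y0:y1+1, x0:x1+1] using two integral images.

    Var = (S2 - S1**2 / n) / n, where S1 and S2 are the block sums of the
    image and of its elementwise square, each read with four lookups.
    """
    img = np.asarray(image, dtype=np.int64)
    table1 = img.cumsum(axis=0).cumsum(axis=1)          # integral image of i
    table2 = (img * img).cumsum(axis=0).cumsum(axis=1)  # integral image of i**2
    n = (x1 - x0 + 1) * (y1 - y0 + 1)
    s1 = rect_sum(table1, x0, y0, x1, y1)
    s2 = rect_sum(table2, x0, y0, x1, y1)
    return (s2 - s1 * s1 / n) / n
```

In practice both tables would be built once and reused for every block, which is what makes the approach fast.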

Similar to the estimation of the mean ($\mu$) and variance ($\operatorname{Var}$), which require the integral images of the first and second powers of the image respectively (i.e. $I(x,y)$ and $I^2(x,y)$), manipulations similar to the ones mentioned above can be made to the third and fourth powers of the image (i.e. $I^3(x,y)$ and $I^4(x,y)$) for obtaining the skewness and kurtosis. [7] One important implementation detail that must be kept in mind for the above methods, as mentioned by Shafait et al., [8] is that integer overflow can occur for the higher-order integral images if 32-bit integers are used.

Implementation considerations

The data type for the sums may need to be different and larger than the data type used for the original values, in order to accommodate the largest expected sum without overflow. For floating-point data, error can be reduced using compensated summation.
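As an illustrative sizing example (the image dimensions below are assumptions, not from the cited sources), an 8-bit 4096 × 4096 image already overflows a signed 32-bit accumulator:

```python
import numpy as np

image = np.random.randint(0, 256, size=(4096, 4096), dtype=np.uint8)

# Worst-case total: 255 * 4096 * 4096 ≈ 4.3e9, which exceeds the signed
# 32-bit integer range, so accumulate the table in 64-bit integers.
table = image.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
```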


References

  1. Lewis, J.P. (1995). Fast template matching. Proc. Vision Interface. pp. 120–123.
  2. Finkelstein, Amir; neeratsharma (2010). "Double Integrals By Summing Values Of Cumulative Distribution Function". Wolfram Demonstrations Project.
  3. Crow, Franklin (1984). "Summed-area tables for texture mapping". SIGGRAPH '84: Proceedings of the 11th annual conference on Computer graphics and interactive techniques. pp. 207–212. doi:10.1145/800031.808600.
  4. Viola, Paul; Jones, Michael (2002). "Robust Real-time Object Detection" (PDF). International Journal of Computer Vision.
  5. BADGERATI (2010-09-03). "Computer Vision – The Integral Image". computersciencesource.wordpress.com. Retrieved 2017-02-13.
  6. Tapia, Ernesto (January 2011). "A note on the computation of high-dimensional integral images". Pattern Recognition Letters. 32 (2): 197–201. Bibcode:2011PaReL..32..197T. doi:10.1016/j.patrec.2010.10.007.
  7. Phan, Thien; Sohoni, Sohum; Larson, Eric C.; Chandler, Damon M. (22 April 2012). "Performance-analysis-based acceleration of image quality assessment". 2012 IEEE Southwest Symposium on Image Analysis and Interpretation (PDF). pp. 81–84. CiteSeerX 10.1.1.666.4791. doi:10.1109/SSIAI.2012.6202458. hdl:11244/25701. ISBN 978-1-4673-1830-3. S2CID 12472935.
  8. Shafait, Faisal; Keysers, Daniel; Breuel, Thomas M. (January 2008). Yanikoglu, Berrin A.; Berkner, Kathrin (eds.). "Efficient implementation of local adaptive thresholding techniques using integral images" (PDF). Electronic Imaging. Document Recognition and Retrieval XV. 6815: 681510–681510-6. Bibcode:2008SPIE.6815E..10S. CiteSeerX 10.1.1.109.2748. doi:10.1117/12.767755. S2CID 9284084.
