Histogram equalization


Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.


Histograms of an image before and after equalization.

Overview

This method usually increases the global contrast of many images, especially when the image is represented by a narrow range of intensity values. Through this adjustment, the intensities can be better distributed on the histogram, utilizing the full range of intensities evenly. This allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most populated intensity values, which are what compress the image's contrast.

The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, it can lead to better views of bone structure in X-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique, adaptive to the input image, and an invertible operator: in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal.

In scientific imaging where spatial correlation is more important than the intensity of the signal (such as separating DNA fragments of quantized length), a low signal-to-noise ratio usually hampers visual detection.

Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images such as thermal, satellite, or X-ray images, often the same class of images to which one would apply false color. Histogram equalization can also produce undesirable effects (like a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (number of unique shades of gray) of the image. Histogram equalization works best when applied to images with a much higher color depth than the palette size, like continuous data or 16-bit gray-scale images.

There are two ways to think about and implement histogram equalization, either as an image change or as a palette change. The operation can be expressed as P(M(I)), where I is the original image, M is the histogram equalization mapping, and P is a palette. If we define a new palette as P' = P(M) and leave the image I unchanged, then histogram equalization is implemented as a palette change (or mapping change). On the other hand, if the palette P remains unchanged and the image is modified to I' = M(I), then the implementation is accomplished by image change. In most cases a palette change is better, as it preserves the original data.
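As a minimal sketch (the array names here are illustrative, not from any particular library), both implementations reduce to indexing with a look-up table:

```python
import numpy as np

def equalize_image_change(image: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Image change: pixels are rewritten as I' = M(I); the original data is lost."""
    return M[image]

def equalize_palette_change(palette: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Palette change: the palette becomes P' = P(M); the indexed image I is untouched."""
    return palette[M]
```

Displaying I with the new palette, palette[M][image], yields exactly the same pixels as displaying M[image] with the old palette, which is the identity P(M(I)) = P'(I) from above.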

Modifications of this method use multiple histograms, called subhistograms, to emphasize local contrast rather than overall global contrast. Examples of such methods include adaptive histogram equalization, contrast-limited adaptive histogram equalization (CLAHE), multipeak histogram equalization (MPHE), and multipurpose beta optimized bihistogram equalization (MBOBHE). The goal of these methods, especially MBOBHE, is to improve the contrast without producing brightness mean-shift and detail-loss artifacts by modifying the HE algorithm. [1]

A signal transform equivalent to histogram equalization also seems to occur in biological neural networks, maximizing a neuron's output firing rate as a function of the input statistics. This has been demonstrated in particular in the fly retina. [2]

Histogram equalization is a specific case of the more general class of histogram remapping methods. These methods seek to adjust the image to make it easier to analyze or to improve visual quality (e.g., retinex).

Back projection

The back projection (or "back project") of a histogrammed image is the re-application of the modified histogram to the original image, functioning as a look-up table for pixel brightness values.

For each group of pixels taken from the same position in all input single-channel images, the function writes the histogram bin value to the destination image, where the coordinates of the bin are determined by the values of the pixels in this input group. In terms of statistics, the value of each output image pixel characterizes the probability that the corresponding input pixel group belongs to the object whose histogram is used. [3]
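A minimal single-channel sketch (the function and parameter names here are illustrative assumptions): given hist, e.g. the normalized histogram of an object model, back projection simply indexes it with each pixel's bin:

```python
import numpy as np

def back_project(image: np.ndarray, hist: np.ndarray, bin_width: int) -> np.ndarray:
    """Replace every pixel with the value of the histogram bin its
    intensity falls into, producing a per-pixel likelihood map."""
    bin_index = image // bin_width   # assumes hist has ceil(256 / bin_width) bins
    return hist[bin_index]           # the bin value becomes the output pixel
```

Bright pixels in the output then mark regions whose intensities are common in the model histogram.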

Implementation

Consider a discrete grayscale image $\{x\}$ and let $n_i$ be the number of occurrences of gray level $i$. The probability of an occurrence of a pixel of level $i$ in the image is

$$p_x(i) = p(x = i) = \frac{n_i}{n}, \qquad 0 \le i < L,$$

$L$ being the total number of gray levels in the image (typically 256), $n$ being the total number of pixels in the image, and $p_x(i)$ being in fact the image's histogram for pixel value $i$, normalized to $[0,1]$.

Let us also define the cumulative distribution function corresponding to $i$ as

$$\operatorname{cdf}_x(i) = \sum_{j=0}^{i} p_x(j),$$

which is also the image's accumulated normalized histogram.

We would like to create a transformation of the form $y = T(x)$ to produce a new image $\{y\}$ with a flat histogram. Such an image would have a linearized cumulative distribution function (CDF) across the value range, i.e.

$$\operatorname{cdf}_y(i) = (i + 1)\,K \qquad \text{for } 0 \le i < L$$

for some constant $K$. The properties of the CDF allow us to perform such a transform (see Inverse distribution function); it is defined as

$$y = T(k) = \operatorname{cdf}_x(k),$$

where $k$ is in the range $[0, L-1]$. Notice that $T$ maps the levels into the range $[0, 1]$, since we used a normalized histogram of $\{x\}$. In order to map the values back into their original range, the following simple transformation needs to be applied to the result:

$$y' = y \cdot (L - 1).$$


Here $y$ is a real value while $y'$ has to be an integer. An intuitive and popular method [4] is applying the round operation:

$$y' = \operatorname{round}\bigl(y \cdot (L - 1)\bigr).$$

However, detailed analysis results in a slightly different formulation. The mapped value $y'$ should be $0$ for $0 < y \le \tfrac{1}{L}$, $1$ for $\tfrac{1}{L} < y \le \tfrac{2}{L}$, ..., $k$ for $\tfrac{k}{L} < y \le \tfrac{k+1}{L}$, and finally $L - 1$ for $\tfrac{L-1}{L} < y \le 1$. The quantization formula from $y$ to $y'$ should then be

$$y' = \lceil L \cdot y \rceil - 1.$$

(Note: $y' = -1$ when $y = 0$; however, this does not happen in practice, because $y = 0$ means that no pixel takes that value.)
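The derivation above translates almost line for line into code. The following is a sketch (assuming Python with NumPy; the function name is illustrative) for an integer grayscale image with levels $0..L-1$, using the ceiling-based quantization just described:

```python
import numpy as np

def histogram_equalize(img: np.ndarray, L: int = 256) -> np.ndarray:
    """Equalize an integer grayscale image with levels 0 .. L-1."""
    hist = np.bincount(img.ravel(), minlength=L)  # n_i: occurrences of level i
    p = hist / img.size                           # p_x(i): normalized histogram
    cdf = np.cumsum(p)                            # cdf_x(i): real values in [0, 1]
    y = np.ceil(L * cdf).astype(np.int64) - 1     # quantize: y' = ceil(L * y) - 1
    y = np.clip(y, 0, L - 1)                      # guard the y' = -1 corner case
    return y[img].astype(img.dtype)               # apply the mapping per pixel
```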

Of color images

The above describes histogram equalization on a grayscale image. The method can also be used on color images by applying it separately to the Red, Green, and Blue components of the RGB color values of the image. However, applying the same method independently to the Red, Green, and Blue channels may yield dramatic changes in the image's color balance, since the relative distributions of the color channels change as a result of the algorithm. If the image is first converted to another color space (the Lab color space, or HSL/HSV color space in particular), then the algorithm can be applied to the luminance or value channel without changing the hue and saturation of the image. [5] There are also several histogram equalization methods that operate in 3D color space. Trahanias and Venetsanopoulos applied histogram equalization in 3D color space. [6] However, this results in "whitening", where the probability of bright pixels is higher than that of dark ones. [7] Han et al. proposed using a new cdf defined by the iso-luminance plane, which results in a uniform gray distribution. [8]
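As an illustration of the HSL/HSV approach (a sketch assuming OpenCV is available; any library with color-space conversions would work the same way), only the value channel is equalized:

```python
import cv2
import numpy as np

def equalize_value_channel(bgr: np.ndarray) -> np.ndarray:
    """Equalize brightness only, leaving hue and saturation unchanged."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v)  # ordinary grayscale equalization on V only
    return cv2.cvtColor(cv2.merge((h, s, v_eq)), cv2.COLOR_HSV2BGR)
```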

Examples

For consistency with statistical usage, "CDF" (i.e., cumulative distribution function) should be replaced by "cumulative histogram", especially since the article links to cumulative distribution function, which is derived by dividing the values in the cumulative histogram by the total number of pixels. The equalized CDF can be written in terms of rank as $\operatorname{cdf}(v) = \operatorname{rank}(v) / (M \cdot N)$, the fraction of pixels with value at most $v$.

Small image

The 8×8 sub-image shown in 8-bit grayscale

The 8-bit grayscale image shown has the following values:

 52  55  61  59  79  61  76  61
 62  59  55 104  94  85  59  71
 63  65  66 113 144 104  63  72
 64  70  70 126 154 109  71  69
 67  73  68 106 122  88  68  68
 68  79  60  70  77  66  58  75
 69  85  64  58  55  61  65  83
 70  87  69  68  65  73  78  90


The histogram for this image is shown in the following table. Pixel values that have a zero count are excluded for the sake of brevity.

Value  Count    Value  Count    Value  Count    Value  Count    Value  Count
  52     1        64     2        72     1        85     2       113     1
  55     3        65     3        73     2        87     1       122     1
  58     2        66     2        75     1        88     1       126     1
  59     3        67     1        76     1        90     1       144     1
  60     1        68     5        77     1        94     1       154     1
  61     4        69     3        78     1       104     2
  62     1        70     4        79     2       106     1
  63     2        71     2        83     1       109     1

The cumulative distribution function (cdf) is shown below. Again, pixel values that do not contribute to an increase in the cdf are excluded for brevity.

v (pixel intensity)   cdf(v)   h(v), equalized v
        52               1            0
        55               4           12
        58               6           20
        59               9           32
        60              10           36
        61              14           53
        62              15           57
        63              17           65
        64              19           73
        65              22           85
        66              24           93
        67              25           97
        68              30          117
        69              33          130
        70              37          146
        71              39          154
        72              40          158
        73              42          166
        75              43          170
        76              44          174
        77              45          178
        78              46          182
        79              48          190
        83              49          194
        85              51          202
        87              52          206
        88              53          210
        90              54          215
        94              55          219
       104              57          227
       106              58          231
       109              59          235
       113              60          239
       122              61          243
       126              62          247
       144              63          251
       154              64          255

This cdf shows that the minimum value in the sub-image is 52 and the maximum value is 154. The cdf of 64 for value 154 coincides with the number of pixels in the image. The cdf must be normalized to $[0, 255]$. The general histogram equalization formula is

$$h(v) = \operatorname{round}\!\left(\frac{\operatorname{cdf}(v) - \operatorname{cdf}_{\min}}{(M \times N) - \operatorname{cdf}_{\min}} \times (L - 1)\right),$$

where $\operatorname{cdf}_{\min}$ is the minimum non-zero value of the cumulative distribution function (in this case 1), $M \times N$ gives the image's number of pixels (for the example above 64, where $M$ is width and $N$ the height), and $L$ is the number of gray levels used (in most cases, like this one, 256).


Note that to scale values in the original data that are above 0 to the range 1 to $L - 1$, inclusive, the above equation would instead be

$$h(v) = \operatorname{round}\!\left(\frac{\operatorname{cdf}(v) - \operatorname{cdf}_{\min}}{(M \times N) - \operatorname{cdf}_{\min}} \times (L - 2)\right) + 1 \qquad \text{for } \operatorname{cdf}(v) > 0.$$

Scaling from 1 to 255 preserves the non-zero-ness of the minimum value.


The equalization formula for the example, scaling data from 0 to 255 inclusive, is

$$h(v) = \operatorname{round}\!\left(\frac{\operatorname{cdf}(v) - 1}{64 - 1} \times 255\right).$$

For example, the cdf of 78 is 46. (The value of 78 is used in the bottom row of the 7th column.) The normalized value becomes

$$h(78) = \operatorname{round}\!\left(\frac{46 - 1}{63} \times 255\right) = \operatorname{round}(182.14) = 182.$$
Once this is done, the values of the equalized image are taken directly from the normalized cdf to yield the equalized values:

  0  12  53  32 190  53 174  53
 57  32  12 227 219 202  32 154
 65  85  93 239 251 227  65 158
 73 146 146 247 255 235 154 130
 97 166 117 231 243 210 117 117
117 190  36 146 178  93  20 170
130 202  73  20  12  53  85 194
146 206 130 117  85 166 182 215

Notice that the minimum value (52) is now 0 and the maximum value (154) is now 255.

Original and equalized sub-images.
Histograms of the original and equalized images.
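The whole example can be reproduced in a few lines (a sketch, not part of the original example), applying the general formula with $\operatorname{cdf}_{\min} = 1$:

```python
import numpy as np

img = np.array([[52, 55, 61,  59,  79,  61, 76, 61],
                [62, 59, 55, 104,  94,  85, 59, 71],
                [63, 65, 66, 113, 144, 104, 63, 72],
                [64, 70, 70, 126, 154, 109, 71, 69],
                [67, 73, 68, 106, 122,  88, 68, 68],
                [68, 79, 60,  70,  77,  66, 58, 75],
                [69, 85, 64,  58,  55,  61, 65, 83],
                [70, 87, 69,  68,  65,  73, 78, 90]])

hist = np.bincount(img.ravel(), minlength=256)    # the histogram table above
cdf = np.cumsum(hist)                             # cumulative counts; cdf[154] == 64
cdf_min = cdf[cdf > 0][0]                         # smallest non-zero value, here 1
h = (cdf - cdf_min) / (img.size - cdf_min) * 255  # levels absent from the image can
h = np.round(np.clip(h, 0, 255)).astype(np.uint8) # map below 0; clip them away
print(h[78])    # 182, matching the worked computation above
print(h[img])   # the equalized 8x8 block, with 52 -> 0 and 154 -> 255
```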

Full-sized image

Before histogram equalization, with corresponding histogram (red) and cumulative histogram (black).
After histogram equalization, with corresponding histogram (red) and cumulative histogram (black).

Advantages and disadvantages

Advantages

  1. Improved visual quality: histogram equalization increases global contrast, making details clearer and improving the overall appearance of an image.
  2. Aids feature detection: it can help computer programs pick out important features in images, which is useful for tasks like medical diagnosis or object identification.
  3. Easy to use: the technique is straightforward and accessible to beginners in image editing and computer vision.
  4. Efficient: since it works directly on the image pixels, it needs no extra data or complicated setup, making it suitable for real-time applications and devices with limited computing power.

Disadvantages

  1. Can introduce artifacts: histogram equalization can make images look unnatural by exaggerating small details or making textures too prominent, which can be distracting.
  2. Weak on local detail: it may not preserve fine detail well, especially in areas of the image that contain both bright and dark regions.
  3. Indiscriminate: it treats the whole image the same way, which may not suit images with specific lighting conditions or unusual characteristics.
  4. Tricky to fine-tune: adjusting the method to a particular image can be difficult, and getting it right often requires a good understanding of image processing.



References

  1. Hum, Yan Chai; Lai, Khin Wee; Mohamad Salim, Maheza Irna (October 11, 2014). "Multiobjectives bihistogram equalization for image contrast enhancement". Complexity. 20 (2): 22–36. Bibcode:2014Cmplx..20b..22H. doi:10.1002/cplx.21499.
  2. Laughlin, S. B. (1981). "A simple coding procedure enhances a neuron's information capacity". Z. Naturforsch. C. 36 (9–10): 910–912.
  3. Intel Corporation (2001). "Open Source Computer Vision Library Reference Manual" (PDF). Retrieved January 11, 2015.
  4. Gonzalez, Rafael C.; Woods, Richard E. (2018). Digital Image Processing (4th ed.). New York, NY: Pearson. pp. 138–140. ISBN 978-1-292-22304-9. OCLC 991765590.
  5. Naik, S.; Murthy, C. (December 2003). "Hue-preserving color image enhancement without gamut problem". IEEE Transactions on Image Processing. 12 (12): 1591–1598.
  6. Trahanias, P. E.; Venetsanopoulos, A. N. (August–September 1992). "Color image enhancement through 3-D histogram equalization". Proc. 15th IAPR Int. Conf. Pattern Recognition. Vol. 1. pp. 545–548.
  7. Bassiou, N.; Kotropoulos, C. (July–August 2007). "Color image histogram equalization by absolute discounting back-off". Computer Vision and Image Understanding. 107 (1–2): 108–122.
  8. Han, Ji-Hee; Yang, Sejung; Lee, Byung-Uk (2011). "A Novel 3-D Color Histogram Equalization Method with Uniform 1-D Gray Scale Histogram". IEEE Transactions on Image Processing. 20 (2): 506–512. Bibcode:2011ITIP...20..506H. doi:10.1109/TIP.2010.2068555. PMID 20801744. S2CID 17972519.