Non-local means

[Image: Application of non-local means to an image corrupted by Gaussian noise]

Non-local means is an algorithm in image processing for image denoising. Unlike "local mean" filters, which take the mean value of a group of pixels surrounding a target pixel to smooth the image, non-local means filtering takes a mean of all pixels in the image, weighted by how similar these pixels are to the target pixel. This results in much greater post-filtering clarity and less loss of detail in the image compared with local mean algorithms.[1]

Compared with other well-known denoising techniques, non-local means adds "method noise" (i.e., error in the denoising process) that looks more like white noise, which is desirable because it is typically less disturbing in the denoised product.[2] More recently, non-local means has been extended to other image processing applications such as deinterlacing,[3] view interpolation,[4] and depth map regularization.[5]

Definition

Suppose $\Omega$ is the area of an image, and $p$ and $q$ are two points within the image. Then, the algorithm is:[6]

$$u(p) = \frac{1}{C(p)} \int_{\Omega} v(q)\, f(p,q) \, \mathrm{d}q,$$

where $u(p)$ is the filtered value of the image at point $p$, $v(q)$ is the unfiltered value of the image at point $q$, $f(p,q)$ is the weighting function, and the integral is evaluated over all $q \in \Omega$.

$C(p)$ is a normalizing factor, given by

$$C(p) = \int_{\Omega} f(p,q) \, \mathrm{d}q.$$

Common weighting functions

The purpose of the weighting function, $f(p,q)$, is to determine how closely related the image at the point $q$ is to the image at the point $p$. It can take many forms.

Gaussian

The Gaussian weighting function sets up a normal distribution with a mean, $\mu = B(p)$, and a variable standard deviation:[7]

$$f(p,q) = e^{-\frac{\left|B(q) - B(p)\right|^2}{h^2}},$$

where $h$ is the filtering parameter (i.e., standard deviation) and $B(p)$ is the local mean value of the image point values surrounding $p$.
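
Expressed as code, this weight is a single expression. The minimal sketch below (Python/NumPy; the name gaussian_weight and the argument names B_p and B_q are illustrative, not taken from the source) evaluates the Gaussian weight for one pair of local means:

```python
import numpy as np

def gaussian_weight(B_p, B_q, h):
    """Gaussian NLM weight f(p, q) = exp(-|B(q) - B(p)|^2 / h^2).

    A larger filtering parameter h tolerates larger differences between
    the local means and therefore smooths more aggressively.
    """
    return np.exp(-(np.abs(B_q - B_p) ** 2) / h ** 2)
```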

Discrete algorithm

For an image, $\Omega$, with discrete pixels, a discrete algorithm is required.

$$u(p) = \frac{1}{C(p)} \sum_{q \in \Omega} v(q)\, f(p,q),$$

where, once again, $v(q)$ is the unfiltered value of the image at point $q$. $C(p)$ is given by:

$$C(p) = \sum_{q \in \Omega} f(p,q).$$

Then, for a Gaussian weighting function,

$$f(p,q) = e^{-\frac{\left|B(q) - B(p)\right|^2}{h^2}},$$

where $B(p)$ is given by:

$$B(p) = \frac{1}{|R(p)|} \sum_{i \in R(p)} v(i),$$

where $R(p) \subseteq \Omega$ is a square region of pixels surrounding $p$ and $|R(p)|$ is the number of pixels in the region $R(p)$.
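
These discrete formulas translate almost line for line into code. The following sketch (Python/NumPy; the function name nlm_denoise and the default values of h and r are illustrative assumptions, not taken from the source) first computes the local mean $B(p)$ over a square region of half-width r around each pixel, then forms the weighted average $u(p)$ over all pixels $q$:

```python
import numpy as np

def nlm_denoise(v, h=10.0, r=3):
    """Sketch of the discrete non-local means algorithm described above.

    v : 2-D array of noisy pixel values.
    h : filtering parameter of the Gaussian weighting function.
    r : half-width of the square region R(p) used for the local mean B(p).
    """
    rows, cols = v.shape

    # B[p]: mean of the square region R(p) around each pixel (clipped at the borders).
    B = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            region = v[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            B[i, j] = region.mean()

    # u(p) = (1 / C(p)) * sum_q v(q) f(p, q), with f(p, q) = exp(-|B(q) - B(p)|^2 / h^2).
    u = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            f = np.exp(-((B - B[i, j]) ** 2) / h ** 2)
            u[i, j] = (f * v).sum() / f.sum()
    return u
```

Because every pixel is compared against every other pixel, the two nested loops above cost time quadratic in the number of pixels, which motivates the approximations discussed in the next section.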

Efficient implementation

The computational complexity of the non-local means algorithm is quadratic in the number of pixels in the image, making it particularly expensive to apply directly. Several techniques have been proposed to speed up execution. One simple variant restricts the computation of the mean for each pixel to a search window centred on the pixel itself, instead of the whole image. Another approximation uses summed-area tables and the fast Fourier transform to calculate the similarity between two pixel windows, speeding up the algorithm by a factor of 50 while preserving comparable quality in the result.[8]
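
As a rough illustration of the search-window variant, the sketch below (again an assumed implementation, reusing the names from the previous example) averages each pixel only over a (2s + 1) × (2s + 1) window centred on it rather than over the whole image:

```python
import numpy as np

def nlm_denoise_windowed(v, h=10.0, r=3, s=10):
    """Non-local means restricted to a search window of half-width s."""
    rows, cols = v.shape

    # Local means B(p) over the square region R(p), as before.
    B = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            region = v[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            B[i, j] = region.mean()

    u = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            # Only pixels q inside the search window contribute to u(p).
            i0, i1 = max(i - s, 0), min(i + s + 1, rows)
            j0, j1 = max(j - s, 0), min(j + s + 1, cols)
            f = np.exp(-((B[i0:i1, j0:j1] - B[i, j]) ** 2) / h ** 2)
            u[i, j] = (f * v[i0:i1, j0:j1]).sum() / f.sum()
    return u
```

For an image of N pixels this replaces the quadratic cost with roughly N · (2s + 1)² weight evaluations, at the price of only exploiting self-similarity within the search window.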

References

  1. Buades, Antoni (20–25 June 2005). "A non-local algorithm for image denoising". 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 2. pp. 60–65. CiteSeerX 10.1.1.103.9157. doi:10.1109/CVPR.2005.38. ISBN 978-0-7695-2372-9. S2CID 11206708.
  2. Buades, Antoni. "On image denoising methods" (PDF). 123 Seminars Only.
  3. Dehghannasiri, R.; Shirani, S. (2012). "A novel de-interlacing method based on locally-adaptive Nonlocal-means". 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR). pp. 1708–1712. doi:10.1109/ACSSC.2012.6489324. ISBN 978-1-4673-5051-8. S2CID 20709950.
  4. Dehghannasiri, R.; Shirani, S. (2013). "A view interpolation method without explicit disparity estimation". 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). pp. 1–4. doi:10.1109/ICMEW.2013.6618274. ISBN 978-1-4799-1604-7. S2CID 32025000.
  5. Martinello, Manuel; Favaro, Paolo. "Depth Estimation From a Video Sequence with Moving and Deformable Objects" (PDF). IET Image Processing Conference.
  6. Buades, Antoni (2011). "Non-Local Means Denoising". Image Processing on Line. 1: 208–212. doi:10.5201/ipol.2011.bcm_nlm. S2CID 34599104.
  7. Buades, Antoni. "On image denoising methods (page 10)" (PDF). 123 Seminars Only.
  8. Wang, Jin; Guo, Yanwen; Ying, Yiting; Liu, Yanli; Peng, Qunsheng (2006). "Fast non-local algorithm for image denoising". International Conference on Image Processing. pp. 1429–1432.