Non-local means

[Figure: Application of non-local means to an image corrupted by Gaussian noise]

Non-local means is an algorithm in image processing for image denoising. Unlike "local mean" filters, which take the mean value of a group of pixels surrounding a target pixel to smooth the image, non-local means filtering takes a mean of all pixels in the image, weighted by how similar these pixels are to the target pixel. This results in much greater post-filtering clarity and less loss of detail in the image compared with local mean algorithms. [1]

Compared with other well-known denoising techniques, the "method noise" of non-local means (i.e., the error introduced by the denoising process) looks more like white noise, which is desirable because white noise is typically less disturbing in the denoised product. [2] Recently, non-local means has been extended to other image processing applications such as deinterlacing, [3] view interpolation, [4] and depth map regularization. [5]

Definition

Suppose $\Omega$ is the area of an image, and $p$ and $q$ are two points within the image. Then, the algorithm is: [6]

$$u(p) = \frac{1}{C(p)} \int_{\Omega} v(q)\, f(p,q)\, \mathrm{d}q,$$

where $u(p)$ is the filtered value of the image at point $p$, $v(q)$ is the unfiltered value of the image at point $q$, $f(p,q)$ is the weighting function, and the integral is evaluated $\forall q \in \Omega$.

$C(p)$ is a normalizing factor, given by:

$$C(p) = \int_{\Omega} f(p,q)\, \mathrm{d}q.$$

Common weighting functions

The purpose of the weighting function, $f(p,q)$, is to determine how closely related the image at the point $q$ is to the image at the point $p$. It can take many forms.

Gaussian

The Gaussian weighting function sets up a normal distribution with mean $\mu = B(p)$ and a variable standard deviation: [7]

$$f(p,q) = e^{-\frac{\left|B(q) - B(p)\right|^2}{h^2}},$$

where $h$ is the filtering parameter (i.e., the standard deviation) and $B(p)$ is the local mean value of the image point values surrounding $p$.

Discrete algorithm

For an image, $\Omega$, with discrete pixels, a discrete algorithm is required:

$$u(p) = \frac{1}{C(p)} \sum_{q \in \Omega} v(q)\, f(p,q),$$

where $C(p)$ is given by:

$$C(p) = \sum_{q \in \Omega} f(p,q).$$

Then, for a Gaussian weighting function,

$$f(p,q) = e^{-\frac{\left|B(q) - B(p)\right|^2}{h^2}},$$

where $B(p)$ is given by:

$$B(p) = \frac{1}{|R(p)|} \sum_{i \in R(p)} v(i),$$

where $R(p) \subseteq \Omega$ is a square region of pixels surrounding $p$ and $|R(p)|$ is the number of pixels in the region $R(p)$.
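
To make the discrete formulas concrete, the following is a minimal, unoptimized Python sketch of the algorithm above using NumPy. The function name and default parameter values are illustrative assumptions, not part of the original algorithm description:

```python
import numpy as np

def nl_means(v, patch_radius=3, h=10.0):
    """Brute-force non-local means with the Gaussian weighting above.

    v            -- 2-D array of noisy pixel intensities
    patch_radius -- half-width of the square region R(p) around each pixel
    h            -- filtering parameter (standard deviation of the weights)
    """
    rows, cols = v.shape
    r = patch_radius

    # B(p): local mean of the image values in the square region R(p).
    padded = np.pad(v.astype(float), r, mode='reflect')
    B = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            B[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()

    # u(p): for every pixel p, a mean over ALL pixels q, weighted by f(p, q).
    u = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            f = np.exp(-((B - B[i, j]) ** 2) / h ** 2)  # f(p, q) for all q
            u[i, j] = (f * v).sum() / f.sum()           # divided by C(p)
    return u
```

The full pass over the image for every output pixel is what makes the direct algorithm quadratic in the number of pixels, which motivates the optimizations in the next section.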

Efficient implementation

The computational complexity of the non-local means algorithm is quadratic in the number of pixels in the image, making it particularly expensive to apply directly. Several techniques have been proposed to speed up execution. One simple variant restricts the computation of the mean for each pixel to a search window centred on the pixel itself, instead of the whole image. Another approximation uses summed-area tables and the fast Fourier transform to calculate the similarity between two pixel windows, speeding up the algorithm by a factor of 50 while preserving comparable quality of the result. [8]
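
As an illustration of the first speed-up, the sketch below (under the same assumptions as the earlier sketch, with search_radius an assumed parameter name) restricts the weighted mean to a search window around each pixel; in practice one would more likely reach for a tuned library routine such as skimage.restoration.denoise_nl_means from scikit-image:

```python
import numpy as np

def nl_means_windowed(v, patch_radius=3, h=10.0, search_radius=10):
    """Non-local means restricted to a search window around each pixel.

    Only pixels q inside a (2*search_radius + 1)^2 window centred on p
    contribute to the mean, so for a fixed window size the cost drops
    from quadratic to linear in the number of pixels.
    """
    rows, cols = v.shape
    r, s = patch_radius, search_radius

    # Local means B(p), exactly as in the brute-force version.
    padded = np.pad(v.astype(float), r, mode='reflect')
    B = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            B[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()

    u = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # Clip the search window to the image bounds.
            i0, i1 = max(i - s, 0), min(i + s + 1, rows)
            j0, j1 = max(j - s, 0), min(j + s + 1, cols)
            f = np.exp(-((B[i0:i1, j0:j1] - B[i, j]) ** 2) / h ** 2)
            u[i, j] = (f * v[i0:i1, j0:j1]).sum() / f.sum()
    return u
```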

See also

Bilateral filter
Guided filter
Noise reduction
Total variation denoising

References

  1. Buades, Antoni (20–25 June 2005). "A non-local algorithm for image denoising". Computer Vision and Pattern Recognition, 2005. Vol. 2. pp. 60–65. CiteSeerX 10.1.1.103.9157. doi:10.1109/CVPR.2005.38. ISBN 978-0-7695-2372-9.
  2. Buades, Antoni. "On image denoising methods" (PDF). 123 Seminars Only.
  3. Dehghannasiri, R.; Shirani, S. (2012). "A novel de-interlacing method based on locally-adaptive Nonlocal-means". 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR). pp. 1708–1712. doi:10.1109/ACSSC.2012.6489324. ISBN 978-1-4673-5051-8.
  4. Dehghannasiri, R.; Shirani, S. (2013). "A view interpolation method without explicit disparity estimation". 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). pp. 1–4. doi:10.1109/ICMEW.2013.6618274. ISBN 978-1-4799-1604-7.
  5. Martinello, Manuel; Favaro, Paolo. "Depth Estimation From a Video Sequence with Moving and Deformable Objects" (PDF). IET Image Processing Conference.
  6. Buades, Antoni. "Non-Local Means Denoising". Image Processing On Line.
  7. Buades, Antoni. "On image denoising methods (page 10)" (PDF). 123 Seminars Only.
  8. Wang, Jin; Guo, Yanwen; Ying, Yiting; Liu, Yanli; Peng, Qunsheng (2006). "Fast non-local algorithm for image denoising". International Conference on Image Processing. pp. 1429–1432.