Blind deconvolution

In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. It is usually carried out by making appropriate assumptions about the input and estimating the impulse response by analyzing the output. Blind deconvolution is not solvable without assumptions on the input and the impulse response. Most algorithms for this problem are based on the assumption that both the input and the impulse response live in respective known subspaces. Even with this assumption, however, blind deconvolution remains a very challenging non-convex optimization problem.
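
To make the subspace assumption concrete, the following sketch (a minimal illustration in Python with NumPy; all sizes and names are illustrative) builds the forward model: a signal and an impulse response, each confined to a known subspace, are observed only through their circular convolution.

```python
# Minimal sketch of the subspace measurement model behind many blind
# deconvolution formulations (illustrative sizes, not a real algorithm).
import numpy as np

rng = np.random.default_rng(0)
L, K, N = 256, 8, 16               # observation length; subspace dimensions

B = rng.standard_normal((L, K))    # known basis for the impulse response
C = rng.standard_normal((L, N))    # known basis for the input signal
h = rng.standard_normal(K)         # unknown coefficients of the response
m = rng.standard_normal(N)         # unknown coefficients of the input

w = B @ h                          # impulse response, lives in range(B)
x = C @ m                          # input signal, lives in range(C)

# Circular convolution via the FFT: the only quantity that is observed.
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# Recovering (h, m) from y alone is the blind deconvolution problem;
# identifiability requires roughly L > K + N, as in the examples below.
```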

Top left image: NGC224 by Hubble Space Telescope. Top right contour: best fit of the point spread function (PSF) (a priori). Middle left image: deconvolution by maximum a posteriori estimation (MAP), 2nd iteration. Middle right contour: estimate of the PSF by MAP, 2nd iteration. Bottom left image: deconvolution by MAP, final result. Bottom right contour: estimate of the PSF by MAP, final result.

In image processing

In image processing, blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single image or a set of "blurred" images in the presence of a poorly determined or unknown point spread function (PSF). [2] Regular linear and non-linear deconvolution techniques use a known PSF. In blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. Researchers have studied blind deconvolution methods for several decades and have approached the problem from different directions.

Most of the work on blind deconvolution started in the early 1970s. Blind deconvolution is used in astronomical imaging and medical imaging.

Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but not necessary.
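
As a rough illustration of the iterative approach, the following sketch (Python with NumPy; a simplification, not the MAP scheme of the figure above) alternates Richardson–Lucy-style multiplicative updates between the scene and the PSF under a circular convolution model with a nonnegative image.

```python
import numpy as np

def rl_update(a, b, y, eps=1e-12):
    """One Richardson-Lucy multiplicative update of `a`, holding `b` fixed,
    under the circular model y ~ a (*) b, for 2D nonnegative arrays."""
    Fb = np.fft.rfft2(b)
    blur = np.fft.irfft2(np.fft.rfft2(a) * Fb, s=a.shape)
    ratio = y / (blur + eps)
    corr = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(Fb), s=a.shape)
    return a * corr

def blind_rl(y, n_outer=50):
    image = np.full_like(y, y.mean())    # flat initial scene estimate
    psf = np.full_like(y, 1.0 / y.size)  # flat initial PSF estimate
    for _ in range(n_outer):
        image = rl_update(image, psf, y)  # PSF fixed, image refined
        psf = rl_update(psf, image, y)    # image fixed, PSF refined
        psf /= psf.sum()                  # keep the PSF a unit-mass kernel
    return image, psf
```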

Examples of non-iterative techniques include SeDDaRA, [3] the cepstrum transform and APEX. The cepstrum transform and APEX methods assume that the PSF has a specific shape, and one must estimate the width of the shape. For SeDDaRA, the information about the scene is provided in the form of a reference image. The algorithm estimates the PSF by comparing the spatial frequency information in the blurred image to that of the target image.
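
The following sketch (Python with NumPy/SciPy) conveys the spectral-ratio idea behind SeDDaRA-style PSF extraction; the exponent `alpha`, the smoothing window, and the zero-phase assumption are simplifications made for this sketch, not the published algorithm's exact recipe.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_otf(blurred, reference, alpha=0.5, smooth=5, eps=1e-12):
    """Attribute the spectral difference between a blurred image and a
    sharp reference image to the blur's optical transfer function."""
    S_blur = uniform_filter(np.abs(np.fft.fft2(blurred)) ** 2, smooth)
    S_ref = uniform_filter(np.abs(np.fft.fft2(reference)) ** 2, smooth)
    otf = (S_blur / (S_ref + eps)) ** alpha   # estimated |OTF|, zero phase
    return otf / otf.max()

# The estimated OTF can then be handed to any non-blind deconvolution
# method, such as a Wiener filter, to recover the scene.
```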

Examples

Any blurred image can be given as input to a blind deconvolution algorithm, which can deblur the image, provided the essential conditions discussed above are not violated. In the first example (picture of shapes), the recovered image is very close to the original image because L > K + N. In the second example (picture of a girl), L < K + N, so the essential condition is violated and the recovered image is far from the original.
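
The nuclear norm minimization referred to in the figure captions rests on "lifting": the unknowns h in R^K and m in R^N enter the L measurements only through the rank-one matrix X = h m^T, and the rank constraint is relaxed to a nuclear norm objective. The toy sketch below (Python with NumPy and CVXPY) uses generic random bilinear measurements rather than the structured Fourier-domain measurements of the actual convolution model.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
K, N = 4, 6
L = 3 * (K + N)                     # oversample: recovery needs L > K + N

h, m = rng.standard_normal(K), rng.standard_normal(N)
B = rng.standard_normal((L, K))
C = rng.standard_normal((L, N))
y = (B @ np.outer(h, m) * C).sum(axis=1)    # y_i = b_i^T (h m^T) c_i

X = cp.Variable((K, N))
constraints = [cp.sum(cp.multiply(np.outer(B[i], C[i]), X)) == y[i]
               for i in range(L)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

# The top singular vector pair of the solution recovers h and m up to
# the inherent scale ambiguity of blind deconvolution.
U, s, Vt = np.linalg.svd(X.value)
h_hat, m_hat = U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])
```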

Blurred image, obtained by convolution of the original image with a blur kernel. The input image lies in a fixed subspace of the wavelet transform and the blur kernel lies in a random subspace.

In signal processing

Seismic data

In the case of deconvolution of seismic data, the original unknown signal is made of spikes, and hence can be characterized with sparsity constraints [4] or regularizations such as the l1/l2 norm ratio, [5] an approach suggested by W. C. Gray in 1978. [6]
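
A minimal sketch of sparsity-constrained spike deconvolution (Python with NumPy), using a plain l1 penalty solved by iterative shrinkage-thresholding (ISTA) rather than the smoothed l1/l2 ratio of [5]:

```python
import numpy as np

def ista_deconv(y, wavelet, lam=0.1, n_iter=500):
    """Minimize 0.5*||y - wavelet (*) x||^2 + lam*||x||_1 over the spike
    train x, with (*) denoting circular convolution."""
    L = len(y)
    W = np.fft.fft(wavelet, L)                 # wavelet frequency response
    step = 1.0 / np.max(np.abs(W)) ** 2        # 1 / Lipschitz constant
    x = np.zeros(L)
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft(W * np.fft.fft(x))) - y
        grad = np.real(np.fft.ifft(np.conj(W) * np.fft.fft(resid)))
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
    return x
```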

Audio deconvolution

Audio deconvolution (often referred to as dereverberation) is the reduction of reverberation in audio mixtures. It is part of audio processing of recordings made in ill-posed conditions such as the cocktail party effect. One possibility is to use independent component analysis (ICA). [7]
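
A minimal sketch of the ICA route (Python with NumPy and scikit-learn). FastICA addresses the instantaneous cocktail-party mixture; genuine dereverberation requires convolutive extensions, such as frequency-domain ICA, which this sketch does not attempt.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 8000)
s1 = np.sign(np.sin(3 * t))                  # square-wave "voice"
s2 = np.sin(7 * t)                           # sinusoidal "voice"
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((len(t), 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
X = S @ A.T                                  # two-microphone recording

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # recovered sources, up to permutation/scale
```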

In general

Suppose we have a signal transmitted through a channel. The channel can usually be modeled as a linear shift-invariant system, so the receiver sees a convolution of the original signal with the impulse response of the channel. If we want to reverse the effect of the channel and obtain the original signal, we must process the received signal with a second linear system that inverts the response of the channel. This system is called an equalizer.
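
When the channel is known, the equalizer can be written down directly, as in this sketch (Python with NumPy; the small `eps` regularizes spectral nulls). A blind method must construct such an inverse without access to `channel`.

```python
import numpy as np

def zero_forcing_equalizer(received, channel, eps=1e-3):
    """Invert a known channel in the frequency domain (zero-forcing,
    lightly regularized so spectral nulls do not blow up)."""
    L = len(received)
    H = np.fft.fft(channel, L)                # channel frequency response
    G = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse 1/H
    return np.real(np.fft.ifft(np.fft.fft(received) * G))
```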

Recovered image after applying the blind deconvolution algorithm, which solves an optimization problem using nuclear norm minimization. L = 65536, K = 65, N = 44838.

If we are given the original signal, we can use a supervised technique, such as finding a Wiener filter; without it, we can still exploit what we do know about the signal to attempt its recovery. For example, we can filter the received signal to obtain the desired spectral power density. This is what happens, for example, when the original signal is known to have no autocorrelation, and we "whiten" the received signal.
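
A minimal sketch of such whitening (Python with NumPy): flatten the magnitude spectrum of the received signal while keeping its phase, matching the flat power spectrum of an uncorrelated source.

```python
import numpy as np

def whiten(received, eps=1e-12):
    """Flatten the magnitude spectrum, keeping the phase; as noted below,
    any phase distortion of the channel is left uncorrected."""
    spectrum = np.fft.fft(received)
    return np.real(np.fft.ifft(spectrum / (np.abs(spectrum) + eps)))
```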

Whitening usually leaves some phase distortion in the results. Most blind deconvolution techniques use higher-order statistics of the signals, and permit the correction of such phase distortions. We can optimize the equalizer to obtain a signal with a PSF approximating what we know about the original PSF.

Original image.

Blurred image, obtained by convolution of the original image with a blur kernel. The original image lies in a fixed subspace of the wavelet transform and the blur lies in a random subspace. L = 65536, K = 200, N = 65400.

Recovered image, very different from the original image, because the essential condition for the blind deconvolution algorithm using nuclear norm minimization is violated. L = 65536, K = 200, N = 65400.

High-order statistics

Blind deconvolution algorithms often make use of high-order statistics, with moments higher than two. This can be implicit or explicit. [8]
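
One explicit use of fourth-order statistics is to search for an equalizing filter whose output has maximal normalized kurtosis, a quantity to which second-order (power spectrum) criteria are blind. A minimal sketch (Python with NumPy; the finite-difference ascent is purely illustrative and assumes a super-Gaussian source):

```python
import numpy as np

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0   # 4th-order stat

def equalize_by_kurtosis(received, taps=11, n_iter=2000, step=1e-2, d=1e-4):
    g = np.zeros(taps)
    g[0] = 1.0                                # start from a pass-through
    for _ in range(n_iter):
        base = kurtosis(np.convolve(received, g, mode="same"))
        grad = np.zeros(taps)
        for i in range(taps):                 # finite-difference gradient
            gp = g.copy()
            gp[i] += d
            grad[i] = (kurtosis(np.convolve(received, gp, mode="same"))
                       - base) / d
        g += step * grad                      # ascend the kurtosis
        g /= np.linalg.norm(g)                # fix the scale ambiguity
    return g
```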

Related Research Articles

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.

Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to a sufficient degree to make it useful.

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects.

In mathematics, deconvolution is the operation inverse to convolution. Both operations are used in signal processing and image processing. For example, it may be possible to recover the original signal after filtering (convolution) by using a deconvolution method with a certain degree of accuracy. Due to the measurement error of the recorded signal or image, it can be demonstrated that the worse the signal-to-noise ratio (SNR), the worse the inversion of a filter will be; hence, inverting a filter is not always a good solution, as the error amplifies. Deconvolution offers a solution to this problem.

Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain. This concept was developed in the 1960s by Thomas Stockham, Alan V. Oppenheim, and Ronald W. Schafer at MIT and independently by Bogert, Healy, and Tukey in their study of time series.

The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object. A more general term for the PSF is the system's impulse response; the PSF is the impulse response or impulse response function (IRF) of a focused optical imaging system. The PSF in many contexts can be thought of as the extended blob in an image that represents a single point object, that is considered as a spatial impulse. In functional terms, it is the spatial domain version of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy and fluorescence microscopy.

Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. In recent research works, scientists have shown that extremely fast computations and massive parallelism are possible for iterative reconstruction, which makes iterative reconstruction practical for commercialization.

In signal processing, a filter bank is an array of bandpass filters that separates the input signal into multiple components, each one carrying a single frequency sub-band of the original signal. One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal. The process of decomposition performed by the filter bank is called analysis; the output of analysis is referred to as a subband signal with as many subbands as there are filters in the filter bank. The reconstruction process is called synthesis, meaning reconstitution of a complete signal resulting from the filtering process.

In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that changes the resulting answer to be "simpler". It is often used to obtain results for ill-posed problems or to prevent overfitting.

The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon B. Lucy, who described it independently.
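
A minimal sketch of the (non-blind) Richardson–Lucy iteration for a 2D image (Python with NumPy/SciPy); blind variants wrap updates like this in an outer loop that re-estimates the PSF as well:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Iteratively sharpen `observed`, assuming the known 2D kernel `psf`
    and Poisson-like noise (multiplicative updates keep positivity)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```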

Blind equalization is a digital signal processing technique in which the transmitted signal is inferred (equalized) from the received signal, while making use only of the transmitted signal statistics. Hence, the use of the word blind in the name.

Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals.

The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm: minimize over β the sum over i of |y_i − f_i(β)|^p, by an iterative method in which each step involves solving a weighted least squares problem.
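
A minimal sketch for the linear case f(β) = Aβ (Python with NumPy; names are illustrative):

```python
import numpy as np

def irls(A, y, p=1.2, n_iter=50, eps=1e-8):
    """Minimize ||y - A x||_p^p by solving a weighted least squares
    problem per iteration, with weights |r_i|^(p-2) from the residuals."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least squares start
    for _ in range(n_iter):
        r = y - A @ x
        w = (np.abs(r) + eps) ** (p - 2)       # reweighting
        Aw = A * w[:, None]                    # W A (rows scaled)
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return x
```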

Sparse approximation theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more.

In signal processing, particularly image processing, total variation denoising, also known as total variation regularization or total variation filtering, is a noise removal process (filter). It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the absolute image gradient is high. According to this principle, reducing the total variation of the signal—subject to it being a close match to the original signal—removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by L. I. Rudin, S. Osher, and E. Fatemi in 1992 and so is today known as the ROF model.

The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can be now viewed as a special case of many other more general methods.

In machine learning, manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is smooth: data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used for applications including medical imaging, geographical imaging, and object recognition.

Sparse dictionary learning is a representation learning method which aims at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. These elements are called atoms, and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may be an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than that of the signals being observed. The above two properties lead to having seemingly redundant atoms that allow multiple representations of the same signal but also provide an improvement in sparsity and flexibility of the representation.

In multidimensional signal processing, multidimensional signal restoration refers to the problem of estimating the original input signal from observations of a distorted or noise-contaminated version of it, using some prior information about the input signal and/or the distortion process. Multidimensional signal processing systems, such as audio, image, and video processing systems, often receive as input signals that have undergone distortions such as blurring or band-limiting during signal acquisition or transmission, and it may be vital to recover the original signal for further filtering. Multidimensional signal restoration is an inverse problem, where only the distorted signal is observed and some information about the distortion process and/or input signal properties is known. A general class of iterative methods has been developed for the multidimensional restoration problem, with successful applications to multidimensional deconvolution, signal extrapolation, and denoising.

Super-resolution photoacoustic imaging is a set of techniques used to enhance spatial resolution in photoacoustic imaging. Specifically, these techniques primarily break the optical diffraction limit of the photoacoustic imaging system. Super-resolution can be achieved through a variety of mechanisms, such as blind structured illumination, multi-speckle illumination, or photo-imprint photoacoustic microscopy.

References


  1. Barmby, Pauline; McLaughlin, Dean E.; Harris, William E.; Harris, Gretchen L. H.; Forbes, Duncan A. (2007). "Structural Parameters for Globular Clusters in M31 and Generalizations for the Fundamental Plane" (PDF). The Astronomical Journal. 133 (6): 2764–2786. arXiv:0704.2057. Bibcode:2007AJ....133.2764B. doi:10.1086/516777. S2CID 58913061.
  2. Lam, Edmund Y.; Goodman, Joseph W. (2000). "Iterative statistical approach to blind image deconvolution". Journal of the Optical Society of America A. 17 (7): 1177–1184. Bibcode:2000JOSAA..17.1177L. doi:10.1364/JOSAA.17.001177. PMID 10883969.
  3. Caron, James N.; Namazi, Nader M.; Rollins, Chris J. (2002). "Noniterative blind data restoration by use of an extracted filter function". Applied Optics. 41 (32): 6884–6889. Bibcode:2002ApOpt..41.6884C. doi:10.1364/AO.41.006884. PMID 12440543.
  4. Broadhead, Michael (2010). "Sparse seismic deconvolution by method of orthogonal matching pursuit".
  5. Repetti, Audrey; Pham, Mai Quyen; Duval, Laurent; Chouzenoux, Emilie; Pesquet, Jean-Christophe (2015). "Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization". IEEE Signal Processing Letters. 22 (5): 539–543. arXiv:1407.5465. Bibcode:2015ISPL...22..539R. doi:10.1109/LSP.2014.2362861. S2CID 9605797.
  6. Gray, W. C. (1978). "Variable norm deconvolution" (PDF). Archived from the original (PDF) on 2015-04-09.
  7. Koldovsky, Zbynek; Tichavsky, Petr (2007). "Time-domain blind audio source separation using advanced ICA methods". Proceedings of the 8th Annual Conference of the International Speech Communication Association (Interspeech 2007). pp. 846–849.
  8. Cardoso, J.-F. (1991). "Super-symmetric decomposition of the fourth-order cumulant tensor. Blind identification of more sources than sensors". Proceedings of ICASSP 91: International Conference on Acoustics, Speech, and Signal Processing. Vol. 5. pp. 3109–3112. CiteSeerX 10.1.1.8.9380. doi:10.1109/ICASSP.1991.150113. ISBN 978-0-7803-0003-3. S2CID 7972548.