Super-resolution imaging

Super-resolution imaging (SR) is a class of techniques that increase the resolution of an imaging system. In optical SR the diffraction limit of the system is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced.

In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC [1] ) and compressed sensing-based algorithms (e.g. SAMV [2] ) are employed to achieve SR over the standard periodogram algorithm.
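
As an illustration of why subspace methods can outperform the periodogram, the following sketch (not from the article; the array geometry, source angles, and noise level are invented for illustration) runs the MUSIC algorithm on a simulated uniform linear array:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 200, 0.5                 # sensors, snapshots, spacing (wavelengths)
angles_true = np.array([0.0, 8.0])    # hypothetical source directions (degrees)

def steering(theta_deg):
    """Steering vectors of a uniform linear array, one column per angle."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(angles_true)                              # M x 2 array response
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise                                      # simulated snapshots

R = X @ X.conj().T / N                                 # sample covariance
_, eigvecs = np.linalg.eigh(R)                         # eigenvalues ascending
En = eigvecs[:, :M - 2]                                # noise subspace (M - #sources)

grid = np.linspace(-30.0, 30.0, 601)
# MUSIC pseudospectrum: large where the steering vector is orthogonal
# to the noise subspace
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# Pick the two strongest local maxima of the pseudospectrum
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
cand = grid[1:-1][is_peak]
top2 = np.sort(cand[np.argsort(P[1:-1][is_peak])[-2:]])
print(top2)        # ≈ [0., 8.]
```

The sharp pseudospectrum peaks recover closely spaced directions that a plain periodogram of the same short aperture would blur together.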

Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy.

Basic concepts

Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles:

The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution exploit, but always stay within, the bounds imposed by the laws of physics and information theory.

Techniques

Optical or diffractive super-resolution

Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.

The "structured illumination" technique of super-resolution is related to moiré patterns. The target, a band of fine fringes (top row), is beyond the diffraction limit. When a band of somewhat coarser resolvable fringes (second row) is artificially superimposed, the combination (third row) features moiré components that are within the diffraction limit and hence contained in the image (bottom row) allowing the presence of the fine fringes to be inferred even though they are not themselves represented in the image.

Multiplexing spatial-frequency bands

An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. [8] [9] The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left).
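
The frequency mixing behind the moiré effect can be sketched numerically (all frequencies here are made up for illustration): multiplying an out-of-passband fringe by a known illumination pattern produces a difference-frequency component inside the passband.

```python
import numpy as np

n = 1024
x = np.arange(n) / n
f_target, f_illum = 220.0, 200.0    # cycles across the field (illustrative values)
target = 1 + np.cos(2 * np.pi * f_target * x)   # fringe beyond an assumed 100-cycle passband
illum = 1 + np.cos(2 * np.pi * f_illum * x)     # known, resolvable illumination fringes
moire = target * illum                          # what forms under structured illumination

spectrum = np.abs(np.fft.rfft(moire))
# The product of the two cosines contains a component at
# |220 - 200| = 20 cycles, well inside the passband, from which the
# presence of the fine fringe can be inferred.
print(spectrum[20] > 100.0)   # prints True
```

Disentangling the superresolved components in a real system amounts to solving for the target spectrum given several such mixed images with known illumination phases.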

Multiple parameter use within traditional diffraction limit

If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute the target structure with extended resolution.

Probing near-field electromagnetic disturbance

The usual discussion of super-resolution involves conventional imaging of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source, [6] which has superior resolution properties; see also evanescent waves and the development of the new superlens.

Geometrical or image-processing super-resolution

Compared to a single image marred by noise during its acquisition or transmission (left), the signal-to-noise ratio is improved by suitable combination of several separately-obtained images (right). This can be achieved only within the intrinsic resolution capability of the imaging process for revealing such detail.

Multi-exposure image noise reduction

When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right.
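
A minimal sketch of the effect, with a synthetic 1-D "scene" and assumed Gaussian noise: averaging N exposures reduces the noise standard deviation by a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in for image detail
sigma, N = 1.0, 100                              # noise level, exposure count

single = scene + rng.normal(0, sigma, scene.shape)
stack = scene + rng.normal(0, sigma, (N,) + scene.shape)
averaged = stack.mean(axis=0)                    # combine the exposures

err_single = np.std(single - scene)
err_avg = np.std(averaged - scene)
print(err_single / err_avg)   # ≈ sqrt(N) = 10
```

The averaged frame is ten times less noisy, but its spatial-frequency content is still capped by the diffraction limit of the optics.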

Single-frame deblurring

Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
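
A sketch of this kind of filtering, assuming a known Gaussian blur and a Wiener-style inverse filter (the kernel width and noise-to-signal ratio are illustrative assumptions, not parameters from the article):

```python
import numpy as np

n = 256
x = np.arange(n)
# Two narrow bars as the "true" scene
signal = (np.abs(x - 100) < 3).astype(float) + (np.abs(x - 140) < 3).astype(float)

# Known blur kernel (Gaussian), applied via FFT (circular convolution)
kernel = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel))
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Wiener filter: inverse where |H| is large, damped where noise dominates
nsr = 1e-4                                   # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))

# Restoration is much closer to the original than the blurred frame,
# but frequencies the blur suppressed below the noise floor stay lost.
print(np.mean((restored - signal) ** 2) < np.mean((blurred - signal) ** 2))
```

Note that the filter only reweights frequencies the system already passed; it cannot create content beyond the diffraction-mandated passband.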

Both features extend over 3 pixels but in different amounts, enabling them to be localized with precision superior to pixel dimension.

Sub-pixel image localization

The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, much better than the pixel width of the detecting apparatus and the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity. [11]
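
The centroid computation can be sketched as follows (the pixel window, source position, and PSF width are hypothetical):

```python
import numpy as np

pixels = np.arange(5)                       # 5-pixel detector window
true_pos = 2.3                              # source sits between pixels 2 and 3
# Light from a single source spread over the pixels (assumed Gaussian PSF)
counts = np.exp(-0.5 * ((pixels - true_pos) / 0.8) ** 2) * 1000

# "Center of gravity" of the recorded counts
centroid = np.sum(pixels * counts) / np.sum(counts)
print(abs(centroid - true_pos) < 0.1)       # localized far below one pixel; prints True
```

With enough photons, the centroid's statistical error shrinks without bound, which is why single-emitter localization can reach nanometer precision.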

Bayesian induction beyond traditional diffraction limit

Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. [12] The classical example is Toraldo di Francia's proposition [13] of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
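
Toraldo di Francia's single-versus-double test can be sketched with an assumed Gaussian PSF (all parameters invented for illustration): the measured RMS width of the image is compared against the width a single source would produce.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)

def psf(center):
    """Assumed unit-width Gaussian point-spread function."""
    return np.exp(-0.5 * (x - center) ** 2)

def rms_width(profile):
    mean = np.sum(x * profile) / np.sum(profile)
    return np.sqrt(np.sum((x - mean) ** 2 * profile) / np.sum(profile))

single = psf(0.0)
double = psf(-0.3) + psf(0.3)   # separation 0.6, well below the PSF width

# Given the prior "single or double?", the excess width flags the double
print(rms_width(double) > rms_width(single))   # prints True
```

The decision works at separations far below the classical resolution limit precisely because the hypothesis space is restricted to two alternatives.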

The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function and that its values in some interval are known exactly. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. [14] More recently, a fast single-image super-resolution algorithm based on a closed-form analytical solution has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly. [15]

Aliasing

Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed. [16]

In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion [17] ), the presence of aliasing is still a necessary condition for SR reconstruction.
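
The role of sub-pixel shifts can be sketched with a toy 1-D shift-and-add fusion (synthetic signal; a real system must also estimate the shifts and handle blur and noise):

```python
import numpy as np

factor = 4                                  # upsampling factor
hi = np.sin(np.linspace(0, 6 * np.pi, 64))  # "true" high-resolution signal

# Each low-resolution frame samples the HR grid at a different sub-pixel
# offset; plain decimation like this leaves the frames aliased.
frames = {shift: hi[shift::factor] for shift in range(factor)}

# Shift-and-add: interleave the frames back onto the HR grid and done
fused = np.empty_like(hi)
for shift, frame in frames.items():
    fused[shift::factor] = frame

print(np.allclose(fused, hi))   # with exact shifts, the HR signal is recovered; prints True
```

If all frames shared the same offset (zero phase diversity), no interleaving could separate the aliased content, which is why varying sub-pixel shifts are a necessary condition for geometrical SR.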

Technical implementations

There are many single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low-resolution images of the same scene. It creates an improved-resolution image by fusing information from all the low-resolution images, and the created higher-resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without introducing blur. These methods use other parts of the low-resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, [18] but researchers have found methods to adapt them to color camera images. [17] Recently, the use of super-resolution for 3D data has also been shown. [19]

Research

There is promising research on using deep convolutional networks to perform super-resolution. [20] In particular, work has demonstrated the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using such a network. [21] While this technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image, so deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. [22] [23] These methods can hallucinate image features, which can make them unsafe for medical use. [24]

See also

Related Research Articles

Microscopy

Microscopy is the technical field of using microscopes to view objects and areas of objects that cannot be seen with the naked eye. There are three well-known branches of microscopy: optical, electron, and scanning probe microscopy, along with the emerging field of X-ray microscopy.

Adaptive optics

Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array.

Point spread function

The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object. A more general term for the PSF is the system's impulse response; the PSF is the impulse response or impulse response function (IRF) of a focused optical imaging system. The PSF in many contexts can be thought of as the extended blob in an image that represents a single point object, that is considered as a spatial impulse. In functional terms, it is the spatial domain version of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy and fluorescence microscopy.

A superlens, or super lens, is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution depending on the illumination wavelength and the numerical aperture (NA) of the objective lens. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them.

Near-field scanning optical microscope

Near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM) is a microscopy technique for nanostructure investigation that breaks the far field resolution limit by exploiting the properties of evanescent waves. In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field on the far side of the aperture. When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, lateral resolution of 6 nm and vertical resolution of 2–5 nm have been demonstrated.

Nanophotonics or nano-optics is the study of the behavior of light on the nanometer scale, and of the interaction of nanometer-scale objects with light. It is a branch of optics, optical engineering, electrical engineering, and nanotechnology. It often involves dielectric structures such as nanoantennas, or metallic components, which can transport and focus light via surface plasmon polaritons.

Digital holography is the acquisition and processing of holograms with a digital sensor array, typically a CCD camera or a similar device. Image rendering, or reconstruction of object data, is performed numerically from digitized interferograms. Digital holography offers a means of measuring optical phase data and typically delivers three-dimensional surface or optical thickness images. Several recording and processing schemes have been developed to assess optical wave characteristics such as amplitude, phase, and polarization state, which make digital holography a very powerful method for metrology applications.

Vertico spatially modulated illumination

Vertico spatially modulated illumination (Vertico-SMI) is the fastest light microscope for the 3D analysis of complete cells in the nanometer range. It is based on two technologies developed in 1996, SMI and SPDM. The effective optical resolution of this optical nanoscope has reached the vicinity of 5 nm in 2D and 40 nm in 3D, greatly surpassing the λ/2 resolution limit applying to standard microscopy using transmission or reflection of natural light according to the Abbe resolution limit. That limit was determined by Ernst Abbe in 1873 and governs the achievable resolution limit of microscopes using conventional techniques.

Superoscillation is a phenomenon in which a signal which is globally band-limited can contain local segments that oscillate faster than its fastest Fourier components. The idea is originally attributed to Yakir Aharonov, and has been made more popularly known through the work of Michael Berry, who also notes that a similar result was known to Ingrid Daubechies.

Speckle, speckle pattern, or speckle noise designates the granular structure observed in coherent light, resulting from random interference. Speckle patterns are used in a wide range of metrology techniques, as they generally allow high sensitivity and simple setups. They can also be a limiting factor in imaging systems, such as radar, synthetic aperture radar (SAR), medical ultrasound and optical coherence tomography. Speckle is not external noise; rather, it is an inherent fluctuation in diffuse reflections, because the scatterers are not identical for each cell, and the coherent illumination wave is highly sensitive to small phase variations.

Super-resolution microscopy is a series of techniques in optical microscopy that allow images to have resolutions higher than those imposed by the diffraction limit, which is due to the diffraction of light. Super-resolution imaging techniques rely on the near-field or on the far-field. Among techniques that rely on the latter are those that improve the resolution only modestly beyond the diffraction limit, such as confocal microscopy with closed pinhole or aided by computational methods such as deconvolution or detector-based pixel reassignment, the 4Pi microscope, and structured-illumination microscopy technologies such as SIM and SMI.

Digital holographic microscopy

Digital holographic microscopy (DHM) is digital holography applied to microscopy. Digital holographic microscopy distinguishes itself from other microscopy methods by not recording the projected image of the object. Instead, the light wave front information originating from the object is digitally recorded as a hologram, from which a computer calculates the object image by using a numerical reconstruction algorithm. The image-forming lens in traditional microscopy is thus replaced by a computer algorithm. Other microscopy methods closely related to digital holographic microscopy are interferometric microscopy, optical coherence tomography and diffraction phase microscopy. Common to all methods is the use of a reference wave front to obtain amplitude (intensity) and phase information. The information is recorded on a digital image sensor or by a photodetector, from which an image of the object is created (reconstructed) by a computer. In traditional microscopy, which does not use a reference wave front, only intensity information is recorded and essential information about the object is lost.

The technique of vibrational analysis with scanning probe microscopy allows probing vibrational properties of materials at the submicrometer scale, and even of individual molecules. This is accomplished by integrating scanning probe microscopy (SPM) and vibrational spectroscopy. This combination allows for much higher spatial resolution than can be achieved with conventional Raman/FTIR instrumentation. The technique is also nondestructive, requires non-extensive sample preparation, and provides more contrast such as intensity contrast, polarization contrast and wavelength contrast, as well as providing specific chemical information and topography images simultaneously.

Photo-activated localization microscopy and stochastic optical reconstruction microscopy (STORM) are widefield fluorescence microscopy imaging methods that allow obtaining images with a resolution beyond the diffraction limit. The methods were proposed in 2006 in the wake of a general emergence of optical super-resolution microscopy methods, and were featured as Methods of the Year for 2008 by the Nature Methods journal. The development of PALM as a targeted biophysical imaging method was largely prompted by the discovery of new species and the engineering of mutants of fluorescent proteins displaying a controllable photochromism, such as photo-activatable GFP. However, the concomitant development of STORM, sharing the same fundamental principle, originally made use of paired cyanine dyes. One molecule of the pair, when excited near its absorption maximum, serves to reactivate the other molecule to the fluorescent state.

Light sheet fluorescence microscopy

Light sheet fluorescence microscopy (LSFM) is a fluorescence microscopy technique with an intermediate-to-high optical resolution, but good optical sectioning capabilities and high speed. In contrast to epifluorescence microscopy, only a thin slice of the sample is illuminated, perpendicularly to the direction of observation. For illumination, a laser light sheet is used, i.e. a laser beam which is focused only in one direction. A second method uses a circular beam scanned in one direction to create the light sheet. As only the actually observed section is illuminated, this method reduces the photodamage and stress induced on a living sample. The good optical sectioning capability also reduces the background signal and thus creates images with higher contrast, comparable to confocal microscopy. Because light sheet fluorescence microscopy scans samples by using a plane of light instead of a point, it can acquire images at speeds 100 to 1,000 times faster than those offered by point-scanning methods.

Super-resolution photoacoustic imaging is a set of techniques used to enhance spatial resolution in photoacoustic imaging. Specifically, these techniques primarily break the optical diffraction limit of the photoacoustic imaging system. This can be achieved through a variety of mechanisms, such as blind structured illumination, multi-speckle illumination, or photo-imprint photoacoustic microscopy (Figure 1).

<span class="mw-page-title-main">Virtually imaged phased array</span> Dispersive optical device

A virtually imaged phased array (VIPA) is an angular dispersive device that, like a prism or a diffraction grating, splits light into its spectral components. The device works almost independently of polarization. In contrast to prisms or regular diffraction gratings, the VIPA has a much higher angular dispersion but has a smaller free spectral range. This aspect is similar to that of an Echelle grating, since it also uses high diffraction orders. To overcome this disadvantage, the VIPA can be combined with a diffraction grating. The VIPA is a compact spectral disperser with high wavelength resolving power.

Deep learning in photoacoustic imaging

Deep learning in photoacoustic imaging combines the hybrid imaging modality of photoacoustic imaging (PA) with the rapidly evolving field of deep learning. Photoacoustic imaging is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion. This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue.

Video super-resolution

Video super-resolution (VSR) is the process of generating high-resolution video frames from given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore fine details while preserving coarse ones, but also to maintain motion consistency.

Structured illumination light sheet microscopy (SI-LSM) is an optical imaging technique used for achieving volumetric imaging with high temporal and spatial resolution in all three dimensions. It combines the ability of light sheet microscopy to maintain spatial resolution throughout relatively thick samples with the higher axial and spatial resolution characteristic of structured illumination microscopy. SI-LSM can achieve lateral resolution below 100 nm in biological samples hundreds of micrometers thick.

References

  1. Schmidt, R.O., "Multiple Emitter Location and Signal Parameter Estimation," IEEE Trans. Antennas Propagation, Vol. AP-34 (March 1986), pp. 276–280.
  2. Abeida, Habti; Zhang, Qilin; Li, Jian; Merabtine, Nadjim (2013). "Iterative Sparse Asymptotic Minimum Variance Based Approaches for Array Processing" (PDF). IEEE Transactions on Signal Processing. 61 (4): 933–944. arXiv: 1802.03070 . Bibcode:2013ITSP...61..933A. doi:10.1109/tsp.2012.2231676. ISSN   1053-587X. S2CID   16276001.
  3. Born M, Wolf E, Principles of Optics , Cambridge Univ. Press, any edition
  4. Fox M, 2007 Quantum Optics Oxford
  5. Zalevsky Z, Mendlovic D. 2003 Optical Superresolution Springer
  6. 1 2 Betzig, E; Trautman, JK (1992). "Near-field optics: microscopy, spectroscopy, and surface modification beyond the diffraction limit". Science. 257 (5067): 189–195. Bibcode:1992Sci...257..189B. doi:10.1126/science.257.5067.189. PMID   17794749. S2CID   38041885.
  7. Lukosz, W., 1966. Optical systems with resolving power exceeding the classical limit. J. opt. soc. Am. 56, 1463–1472.
  8. 1 2 Guerra, John M. (1995-06-26). "Super-resolution through illumination by diffraction-born evanescent waves". Applied Physics Letters. 66 (26): 3555–3557. Bibcode:1995ApPhL..66.3555G. doi:10.1063/1.113814. ISSN   0003-6951.
  9. 1 2 Gustaffsson, M., 2000. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microscopy 198, 82–87.
  10. Cox, I.J., Sheppard, C.J.R., 1986. Information capacity and resolution in an optical system. J.opt. Soc. Am. A 3, 1152–1158
  11. Westheimer, G (2012). "Optical superresolution and visual hyperacuity". Prog Retin Eye Res. 31 (5): 467–80. doi: 10.1016/j.preteyeres.2012.05.001 . PMID   22634484.
  12. Harris, J.L., 1964. Resolving power and decision making. J. opt. soc. Am. 54, 606–611.
  13. Toraldo di Francia, G., 1955. Resolving power and information. J. opt. soc. Am. 45, 497–501.
  14. D. Poot, B. Jeurissen, Y. Bastiaensen, J. Veraart, W. Van Hecke, P. M. Parizel, and J. Sijbers, "Super-Resolution for Multislice Diffusion Tensor Imaging", Magnetic Resonance in Medicine, (2012)
  15. N. Zhao, Q. Wei, A. Basarab, N. Dobigeon, D. Kouamé and J-Y. Tourneret, "Fast single image super-resolution using a new analytical solution for problems", IEEE Trans. Image Process., 2016, to appear.
  16. J. Simpkins, R.L. Stevenson, "An Introduction to Super-Resolution Imaging." Mathematical Optics: Classical, Quantum, and Computational Methods, Ed. V. Lakshminarayanan, M. Calvo, and T. Alieva. CRC Press, 2012. 539-564.
  17. 1 2 S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Fast and Robust Multi-frame Super-resolution", IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327–1344, October 2004.
  18. P. Cheeseman, B. Kanefsky, R. Kraft, and J. Stutz, 1994
  19. S. Schuon, C. Theobalt, J. Davis, and S. Thrun, "LidarBoost: Depth Superresolution for ToF 3D Shape Scanning", In Proceedings of IEEE CVPR 2009
  20. Johnson, Justin; Alahi, Alexandre; Fei-Fei, Li (2016-03-26). "Perceptual Losses for Real-Time Style Transfer and Super-Resolution". arXiv: 1603.08155 [cs.CV].
  21. Grant-Jacob, James A; Mackay, Benita S; Baker, James A G; Xie, Yunhui; Heath, Daniel J; Loxham, Matthew; Eason, Robert W; Mills, Ben (2019-06-18). "A neural lens for super-resolution biological imaging". Journal of Physics Communications. 3 (6): 065004. Bibcode:2019JPhCo...3f5004G. doi: 10.1088/2399-6528/ab267d . ISSN   2399-6528.
  22. Blau, Yochai; Michaeli, Tomer (2018). The perception-distortion tradeoff. IEEE Conference on Computer Vision and Pattern Recognition. pp. 6228–6237. arXiv: 1711.06077 . doi:10.1109/CVPR.2018.00652.
  23. Zeeberg, Amos (2023-08-23). "The AI Tools Making Images Look Better". Quanta Magazine. Retrieved 2023-08-28.
  24. Cohen, Joseph Paul; Luck, Margaux; Honari, Sina (2018). "Distribution Matching Losses Can Hallucinate Features in Medical Image Translation". In Alejandro F. Frangi; Julia A. Schnabel; Christos Davatzikos; Carlos Alberola-López; Gabor Fichtinger (eds.). Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 21st International Conference, Granada, Spain, September 16–20, 2018, Proceedings, Part I. Lecture Notes in Computer Science. Vol. 11070. pp. 529–536. arXiv: 1805.08841 . doi:10.1007/978-3-030-00928-1_60. ISBN   978-3-030-00927-4. S2CID   43919703 . Retrieved 1 May 2022.