Point spread function

Image formation in a confocal microscope: central longitudinal (XZ) slice. The 3D acquired distribution arises from the convolution of the real light sources with the PSF.
A point source as imaged by a system with negative (top), zero (center), and positive (bottom) spherical aberration. Images to the left are defocused toward the inside, images on the right toward the outside.

The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object; it is the impulse response (or impulse response function, IRF) of such a system. In many contexts the PSF can be thought of as the extended blob in an image that represents a single point object, the point being treated as a spatial impulse. In functional terms, it is the spatial-domain version (i.e., the inverse Fourier transform) of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy (as in confocal laser scanning microscopy) and fluorescence microscopy.


The degree of spreading (blurring) in the image of a point object for an imaging system is a measure of the quality of the imaging system. In non-coherent imaging systems, such as fluorescence microscopes, telescopes or optical microscopes, the image-formation process is linear in the image intensity and described by linear system theory. This means that when two objects A and B are imaged simultaneously by a non-coherent imaging system, the resulting image is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B and vice versa, owing to the non-interacting property of photons. In space-invariant systems, i.e., those in which the PSF is the same everywhere in the imaging space, the image of a complex object is then the convolution of that object and the PSF. The PSF can be derived from diffraction integrals. [1]
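
This convolution picture is easy to reproduce numerically. The following minimal Python sketch (the grid size, point positions, and Gaussian stand-in PSF are arbitrary illustrative choices, not taken from any particular instrument) forms the image of two incoherent point sources:

    import numpy as np
    from scipy.signal import fftconvolve

    # Object plane: two incoherent point sources of different intensity.
    obj = np.zeros((128, 128))
    obj[40, 40] = 1.0
    obj[80, 90] = 0.5

    # Stand-in PSF: a normalized 2D Gaussian blur.
    y, x = np.mgrid[-16:17, -16:17]
    psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
    psf /= psf.sum()

    # For a space-invariant, non-coherent system the image intensity is
    # the convolution of the object intensity with the PSF.
    image = fftconvolve(obj, psf, mode="same")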

Introduction

By virtue of the linearity property of optical non-coherent imaging systems, i.e.,

Image(Object1 + Object2) = Image(Object1) + Image(Object2)

the image of an object in a microscope or telescope as a non-coherent imaging system can be computed by expressing the object-plane field as a weighted sum of 2D impulse functions, and then expressing the image-plane field as a weighted sum of the images of these impulse functions. This is known as the superposition principle, valid for linear systems. The images of the individual object-plane impulse functions are called point spread functions (PSFs), reflecting the fact that a mathematical point of light in the object plane is spread out to form a finite area in the image plane. (In some branches of mathematics and physics, these might be referred to as Green's functions or impulse response functions; PSFs are the impulse response functions of imaging systems.)
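
The additivity stated above can be verified directly: for a linear (convolutional) imaging model, imaging the sum of two objects gives the sum of the two images. A short sketch, with random test arrays and an arbitrary normalized stand-in PSF:

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    A = rng.random((64, 64))        # object A
    B = rng.random((64, 64))        # object B
    psf = rng.random((9, 9))
    psf /= psf.sum()                # normalized stand-in PSF

    def image(o):
        return fftconvolve(o, psf, mode="same")

    # Image(A + B) == Image(A) + Image(B), up to floating-point error.
    assert np.allclose(image(A + B), image(A) + image(B))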

Application of PSF: deconvolution of the mathematically modeled PSF and the low-resolution image enhances the resolution.

When the object is divided into discrete point objects of varying intensity, the image is computed as a sum of the PSFs of each point. As the PSF is typically determined entirely by the imaging system (that is, microscope or telescope), the entire image can be described by knowing the optical properties of the system. This imaging process is usually formulated by a convolution equation. In microscope image processing and astronomy, knowing the PSF of the measuring device is very important for restoring the (original) object with deconvolution. For the case of laser beams, the PSF can be mathematically modeled using the concepts of Gaussian beams. [3] For instance, deconvolution of the mathematically modeled PSF and the image improves visibility of features and removes imaging noise. [2]
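
As a hedged illustration of such restoration, the sketch below blurs two nearby points with a known Gaussian PSF and then applies the Richardson–Lucy algorithm from scikit-image (one deconvolution method among many; the PSF is assumed known exactly here, which real measurements only approximate):

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import restoration

    # Two close point sources, blurred by a known Gaussian PSF.
    y, x = np.mgrid[-8:9, -8:9]
    psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
    psf /= psf.sum()
    obj = np.zeros((64, 64))
    obj[32, 20] = 1.0
    obj[32, 28] = 1.0
    blurred = fftconvolve(obj, psf, mode="same")

    # Richardson-Lucy iteratively re-estimates the object given the PSF
    # (the keyword is `num_iter` in recent scikit-image releases).
    restored = restoration.richardson_lucy(blurred, psf, num_iter=30)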

Theory

The point spread function may be independent of position in the object plane, in which case it is called shift invariant. In addition, if there is no distortion in the system, the image plane coordinates are linearly related to the object plane coordinates via the magnification M as:

(x_i, y_i) = (M x_o, M y_o).

If the imaging system produces an inverted image, we may simply regard the image plane coordinate axes as being reversed from the object plane axes. With these two assumptions, i.e., that the PSF is shift-invariant and that there is no distortion, calculating the image plane convolution integral is a straightforward process.

Mathematically, we may represent the object plane field as:

O(x_o, y_o) = ∫∫ O(u, v) δ(x_o − u, y_o − v) du dv

i.e., as a sum over weighted impulse functions, although this is also really just stating the sifting property of 2D delta functions (discussed further below). Rewriting the object transmittance function in the form above allows us to calculate the image plane field as the superposition of the images of each of the individual impulse functions, i.e., as a superposition of weighted point spread functions in the image plane, using the same weighting function O(u, v) as in the object plane. Mathematically, the image is expressed as:

I(x_i, y_i) = ∫∫ O(u, v) PSF(x_i − M u, y_i − M v) du dv

in which PSF(x_i − M u, y_i − M v) is the image of the impulse function δ(x_o − u, y_o − v).
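
In discrete form this superposition is literally a sum of shifted, weighted copies of the PSF, one per object point. A minimal sketch (unit magnification assumed, M = 1; the result is identical to a full 2D convolution of the object with the PSF):

    import numpy as np

    def image_by_superposition(obj, psf):
        """Sum a weighted, shifted copy of the PSF for every object point."""
        H, W = obj.shape
        h, w = psf.shape
        out = np.zeros((H + h - 1, W + w - 1))
        for u in range(H):
            for v in range(W):
                if obj[u, v] != 0.0:
                    out[u:u + h, v:v + w] += obj[u, v] * psf
        return out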

The 2D impulse function may be regarded as the limit (as side dimension w tends to zero) of the "square post" function, shown in the figure below.

Square Post Function

We imagine the object plane as being decomposed into square areas such as this, with each having its own associated square post function. If the height, h, of the post is maintained at 1/w², then as the side dimension w tends to zero, the height, h, tends to infinity in such a way that the volume (integral) remains constant at 1. This gives the 2D impulse the sifting property (which is implied in the equation above), which says that when the 2D impulse function, δ(x − u, y − v), is integrated against any other continuous function, f(u,v), it "sifts out" the value of f at the location of the impulse, i.e., at the point (x,y).
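
The sifting property can be checked numerically: integrating a smooth function against an ever-narrower unit-volume post simply averages the function over a shrinking square, which converges to its value at the impulse location (the test function and point below are arbitrary):

    import numpy as np

    f = lambda u, v: np.cos(u) * np.exp(-v**2)   # any smooth test function
    x0, y0 = 0.7, -0.3                           # impulse location

    for w in [1.0, 0.1, 0.01]:
        u = np.linspace(x0 - w / 2, x0 + w / 2, 201)
        v = np.linspace(y0 - w / 2, y0 + w / 2, 201)
        U, V = np.meshgrid(u, v)
        # Height 1/w^2 over area w^2 gives unit volume, so the integral
        # reduces to the average of f over the little square.
        print(w, f(U, V).mean())   # tends to f(x0, y0) as w -> 0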

The concept of a perfect point source object is central to the idea of PSF. However, there is no such thing in nature as a perfect mathematical point source radiator; the concept is completely non-physical and is rather a mathematical construct used to model and understand optical imaging systems. The utility of the point source concept comes from the fact that a point source in the 2D object plane can only radiate a perfect uniform-amplitude, spherical wave — a wave having perfectly spherical, outward travelling phase fronts with uniform intensity everywhere on the spheres (see Huygens–Fresnel principle). Such a source of uniform spherical waves is shown in the figure below. We also note that a perfect point source radiator will not only radiate a uniform spectrum of propagating plane waves, but a uniform spectrum of exponentially decaying (evanescent) waves as well, and it is these which are responsible for resolution finer than one wavelength (see Fourier optics). This follows from the following Fourier transform expression for a 2D impulse function,

δ(x, y) = (1 / 4π²) ∫∫ e^{i(k_x x + k_y y)} dk_x dk_y

in which every spatial frequency component appears with equal weight.
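
That an impulse carries a perfectly flat (uniform) spectrum is easy to verify with a discrete Fourier transform:

    import numpy as np

    delta = np.zeros((64, 64))
    delta[0, 0] = 1.0
    spectrum = np.fft.fft2(delta)
    # The impulse excites every spatial frequency with equal magnitude.
    assert np.allclose(np.abs(spectrum), 1.0)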

Truncation of Spherical Wave by Lens

The quadratic lens intercepts a portion of this spherical wave, and refocuses it onto a blurred point in the image plane. For a single lens, an on-axis point source in the object plane produces an Airy disk PSF in the image plane. It can be shown (see Fourier optics, Huygens–Fresnel principle, Fraunhofer diffraction) that the field radiated by a planar object (or, by reciprocity, the field converging onto a planar image) is related to its corresponding source (or image) plane distribution via a Fourier transform (FT) relation. In addition, a uniform function over a circular area (in one FT domain) corresponds to J1(x)/x in the other FT domain, where J1(x) is the first-order Bessel function of the first kind. That is, a uniformly-illuminated circular aperture that passes a converging uniform spherical wave yields an Airy disk image at the focal plane. A graph of a sample Airy disk is shown in the adjoining figure.

Airy disk

Therefore, the converging (partial) spherical wave shown in the figure above produces an Airy disk in the image plane. The argument of the function J1(x)/x is important, because this determines the scaling of the Airy disk (in other words, how big the disk is in the image plane). If Θmax is the maximum angle that the converging waves make with the lens axis, r is radial distance in the image plane, and wavenumber k = 2π/λ where λ = wavelength, then the argument of the function is: kr tan(Θmax). If Θmax is small (only a small portion of the converging spherical wave is available to form the image), then the radial distance, r, has to be very large before the total argument of the function moves away from the central spot. In other words, if Θmax is small, the Airy disk is large (which is just another statement of Heisenberg's uncertainty principle for Fourier transform pairs, namely that small extent in one domain corresponds to wide extent in the other domain, the two being related via the space-bandwidth product). By virtue of this, high magnification systems, which typically have small values of Θmax (by the Abbe sine condition), can have more blur in the image, owing to the broader PSF. The size of the PSF is proportional to the magnification, so that the blur is no worse in a relative sense, but it is definitely worse in an absolute sense.
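
The Airy profile and its scaling can be sketched directly with SciPy's first-order Bessel function (the wavelength and aperture radius below are arbitrary illustrative values):

    import numpy as np
    from scipy.special import j1

    wavelength = 550e-9              # green light [m]
    a = 1e-3                         # aperture radius [m]
    k = 2 * np.pi / wavelength

    x = np.linspace(1e-9, 10.0, 2000)     # avoid the 0/0 at the origin
    intensity = (2 * j1(x) / x) ** 2      # normalized Airy pattern

    # The first zero of J1 is at x ~ 3.8317, so the first dark ring sits at
    # sin(theta) = 3.8317 / (k a), i.e. the familiar 1.22 * lambda / (2a).
    theta_first_zero = 3.8317 / (k * a)
    print(theta_first_zero)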

The figure above illustrates the truncation of the incident spherical wave by the lens. In order to measure the point spread function (or impulse response function) of the lens, a perfect point source that radiates a perfect spherical wave in all directions of space is not needed, because the lens has only a finite (angular) bandwidth, or finite intercept angle. Any angular bandwidth contained in the source that extends past the edge angle of the lens (i.e., lies outside the bandwidth of the system) is essentially wasted source bandwidth, since the lens cannot intercept it in order to process it. As a result, all that is needed to measure a perfect point spread function is a light source with at least as much angular bandwidth as the lens being tested (and, of course, uniform over that angular sector); in other words, a point source produced by a convergent (uniform) spherical wave whose half angle is greater than the edge angle of the lens.

Due to the intrinsically limited resolution of imaging systems, measured PSFs are not free of uncertainty. [4] In imaging, it is desirable to suppress the side-lobes of the imaging beam by apodization techniques. In the case of transmission imaging systems with a Gaussian beam distribution, the PSF is modeled by the following equation: [5]

PSF(f, z) = I_r(0, z) · exp[ −( ρ(z) / (a · k · c / (f · NA)) )² ]

where the k-factor depends on the truncation ratio and level of the irradiance, NA is the numerical aperture, c is the speed of light, f is the photon frequency of the imaging beam, I_r is the intensity of the reference beam, a is an adjustment factor, and ρ(z) is the radial position from the center of the beam on the corresponding z-plane.

History and methods

The diffraction theory of point spread functions was first studied by Airy in the nineteenth century. He developed an expression for the point spread function amplitude and intensity of a perfect instrument, free of aberrations (the so-called Airy disk). The theory of aberrated point spread functions close to the optimum focal plane was studied by Zernike and Nijboer in the 1930s and 1940s. A central role in their analysis is played by Zernike's circle polynomials, which allow an efficient representation of the aberrations of any optical system with rotational symmetry. Recent analytic results have made it possible to extend Nijboer and Zernike's approach for point spread function evaluation to a large volume around the optimum focal point. This extended Nijboer–Zernike (ENZ) theory allows studying the imperfect imaging of three-dimensional objects in confocal microscopy or astronomy under non-ideal imaging conditions. The ENZ theory has also been applied to the characterization of optical instruments with respect to their aberration by measuring the through-focus intensity distribution and solving an appropriate inverse problem.

Applications

Microscopy

An example of an experimentally derived point spread function from a confocal microscope using a 63x 1.4 NA oil objective. It was generated using Huygens Professional deconvolution software. Shown are views in xz, xy, yz and a 3D representation.

In microscopy, experimental determination of the PSF requires sub-resolution (point-like) radiating sources; quantum dots and fluorescent beads are usually used for this purpose. [6] [7] Theoretical models as described above, on the other hand, allow the detailed calculation of the PSF for various imaging conditions. The most compact, diffraction-limited shape of the PSF is usually preferred. However, by using appropriate optical elements (e.g., a spatial light modulator) the shape of the PSF can be engineered for different applications.

Astronomy

The point spread function of the Hubble Space Telescope's WFPC camera before corrections were applied to its optical system.

In observational astronomy, the experimental determination of a PSF is often very straightforward due to the ample supply of point sources (stars or quasars). The form and source of the PSF may vary widely depending on the instrument and the context in which it is used.

For radio telescopes and diffraction-limited space telescopes, the dominant terms in the PSF may be inferred from the configuration of the aperture in the Fourier domain. In practice, there may be multiple terms contributed by the various components in a complex optical system. A complete description of the PSF will also include diffusion of light (or photo-electrons) in the detector, as well as tracking errors in the spacecraft or telescope.

For ground-based optical telescopes, atmospheric turbulence (known as astronomical seeing) dominates the contribution to the PSF. In high-resolution ground-based imaging, the PSF is often found to vary with position in the image (an effect called anisoplanatism). In ground-based adaptive optics systems, the PSF is a combination of the aperture of the system with residual uncorrected atmospheric terms. [8]
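
In practice an astronomical PSF is often characterized by fitting a model profile to an isolated star. A hedged sketch (the star cutout here is synthetic, standing in for detector data; a circular Gaussian is the simplest of several profiles in common use, e.g., Moffat):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, sigma, bg):
        x, y = xy
        g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + bg
        return g.ravel()

    # Synthetic 25x25 star cutout with noise, standing in for real data.
    y, x = np.mgrid[0:25, 0:25]
    data = gauss2d((x, y), 1000.0, 12.3, 11.7, 2.1, 50.0).reshape(25, 25)
    data += np.random.default_rng(1).normal(0.0, 5.0, data.shape)

    p0 = (data.max(), 12.0, 12.0, 2.0, float(np.median(data)))
    popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
    fwhm = 2.3548 * popt[3]          # FWHM = 2 sqrt(2 ln 2) * sigma
    print(f"PSF FWHM ~ {fwhm:.2f} pixels")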

Lithography

Overlapped PSF peaks. When the peaks are as close as ~ 1 wavelength/NA, they are effectively merged. The FWHM is ~ 0.6 wavelength/NA at this point.

The PSF is also a fundamental limit to the conventional focused imaging of a hole, [9] with the minimum printed size being in the range of 0.6–0.7 wavelength/NA, where NA is the numerical aperture of the imaging system. [10] [11] For example, in the case of an EUV system with a wavelength of 13.5 nm and NA = 0.33, the minimum individual hole size that can be imaged is in the range of 25–29 nm. A phase-shift mask has 180-degree phase edges which allow finer resolution. [9]
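
The quoted EUV figures follow directly from the 0.6–0.7 wavelength/NA rule; a one-line check:

    wavelength, NA = 13.5, 0.33                     # nm; numerical aperture
    lo, hi = 0.6 * wavelength / NA, 0.7 * wavelength / NA
    print(f"minimum hole size ~ {lo:.0f}-{hi:.0f} nm")   # ~ 25-29 nm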

Ophthalmology

Point spread functions have recently become a useful diagnostic tool in clinical ophthalmology. Patients are measured with a Shack–Hartmann wavefront sensor, and special software calculates the PSF for that patient's eye. This method allows a physician to simulate potential treatments on a patient and estimate how those treatments would alter the patient's PSF. Additionally, once measured, the PSF can be minimized using an adaptive optics system. This approach, in conjunction with a CCD camera and an adaptive optics system, can be used to visualize anatomical structures not otherwise visible in vivo, such as cone photoreceptors. [12]


Related Research Articles

<span class="mw-page-title-main">Optical aberration</span> Deviation from perfect paraxial optical behavior

In optics, aberration is a property of optical systems, such as lenses, that causes light to be spread out over some region of space rather than focused to a point. Aberrations cause the image formed by a lens to be blurred or distorted, with the nature of the distortion depending on the type of aberration. Aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. In an imaging system, it occurs when light from one point of an object does not converge into a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than due to flaws in the optical elements.

<span class="mw-page-title-main">Diffraction</span> Phenomenon of the motion of waves

Diffraction is the interference or bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660.

<span class="mw-page-title-main">Microscopy</span> Viewing of objects which are too small to be seen with the naked eye

Microscopy is the technical field of using microscopes to view objects and areas of objects that cannot be seen with the naked eye. There are three well-known branches of microscopy: optical, electron, and scanning probe microscopy, along with the emerging field of X-ray microscopy.

<span class="mw-page-title-main">Numerical aperture</span> Characteristic of an optical system

In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective, and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it.

<span class="mw-page-title-main">Angular resolution</span> Ability of any image-forming device to distinguish small details of an object

Angular resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution. It is used in optics applied to light waves, in antenna theory applied to radio waves, and in acoustics applied to sound waves. The colloquial use of the term "resolution" sometimes causes confusion; when an optical system is said to have a high resolution or high angular resolution, it means that the perceived distance, or actual angular distance, between resolved neighboring objects is small. The value that quantifies this property, θ, which is given by the Rayleigh criterion, is low for a system with a high resolution. The closely related term spatial resolution refers to the precision of a measurement with respect to space, which is directly connected to angular resolution in imaging instruments. The Rayleigh criterion shows that the minimum angular spread that can be resolved by an image forming system is limited by diffraction to the ratio of the wavelength of the waves to the aperture width. For this reason, high resolution imaging systems such as astronomical telescopes, long distance telephoto camera lenses and radio telescopes have large apertures.

<span class="mw-page-title-main">Deconvolution</span> Reconstruction of a filtered signal

In mathematics, deconvolution is the operation inverse to convolution. Both operations are used in signal processing and image processing. For example, it may be possible to recover the original signal after a filter (convolution) by using a deconvolution method with a certain degree of accuracy. Due to the measurement error of the recorded signal or image, it can be demonstrated that the worse the signal-to-noise ratio (SNR), the worse the reversing of a filter will be; hence, inverting a filter is not always a good solution as the error amplifies. Deconvolution offers a solution to this problem.

Fourier optics is the study of classical optics using Fourier transforms (FTs), in which the waveform being considered is regarded as made up of a combination, or superposition, of plane waves. It has some parallels to the Huygens–Fresnel principle, in which the wavefront is regarded as being made up of a combination of spherical wavefronts whose sum is the wavefront being studied. A key difference is that Fourier optics considers the plane waves to be natural modes of the propagation medium, as opposed to Huygens–Fresnel, where the spherical waves originate in the physical medium.

<span class="mw-page-title-main">Diffraction-limited system</span> Optical system with resolution performance at the instruments theoretical limit

In optics, any optical instrument or system – a microscope, telescope, or camera – has a principal limit to its resolution due to the physics of diffraction. An optical instrument is said to be diffraction-limited if it has reached this limit of resolution performance. Other factors may affect an optical system's performance, such as lens imperfections or aberrations, but these are caused by errors in the manufacture or calculation of a lens, whereas the diffraction limit is the maximum resolution possible for a theoretically perfect, or ideal, optical system.

<span class="mw-page-title-main">Airy disk</span> Diffraction pattern in optics

In optics, the Airy disk and Airy pattern are descriptions of the best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. The Airy disk is of importance in physics, optics, and astronomy.

Optical resolution describes the ability of an imaging system to resolve detail, in the object that is being imaged. An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes to the optical resolution of the system; the environment in which the imaging is done often is a further important factor.

<span class="mw-page-title-main">Blind deconvolution</span>

In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions about the input in order to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on the input and the impulse response. Most algorithms for this problem are based on the assumption that both the input and the impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption.

<span class="mw-page-title-main">Spatial filter</span>

A spatial filter is an optical device which uses the principles of Fourier optics to alter the structure of a beam of light or other electromagnetic radiation, typically coherent laser light. Spatial filtering is commonly used to "clean up" the output of lasers, removing aberrations in the beam due to imperfect, dirty, or damaged optics, or due to variations in the laser gain medium itself. This filtering can be applied to transmit a pure transverse mode from a multimode laser while blocking other modes emitted from the optical resonator. The term "filtering" indicates that the desirable structural features of the original source pass through the filter, while the undesirable features are blocked. An apparatus which follows the filter effectively sees a higher-quality but lower-powered image of the source, instead of the actual source directly. An example of the use of a spatial filter can be seen in advanced setups of micro-Raman spectroscopy.

<span class="mw-page-title-main">Optical transfer function</span> Function that specifies how different spatial frequencies are captured by an optical system

The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.

<span class="mw-page-title-main">Defocus aberration</span> Quality of an image being out of focus

In optics, defocus is the aberration in which an image is simply out of focus. This aberration is familiar to anyone who has used a camera, videocamera, microscope, telescope, or binoculars. Optically, defocus refers to a translation of the focus along the optical axis away from the detection surface. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transitions. Fine detail in the scene is blurred or even becomes invisible. Nearly all image-forming optical devices incorporate some form of focus adjustment to minimize defocus and maximize image quality.

<span class="mw-page-title-main">High-resolution transmission electron microscopy</span>

High-resolution transmission electron microscopy is an imaging mode of specialized transmission electron microscopes that allows for direct imaging of the atomic structure of samples. It is a powerful tool to study properties of materials on the atomic scale, such as semiconductors, metals, nanoparticles and sp2-bonded carbon. While this term is often also used to refer to high resolution scanning transmission electron microscopy, mostly in high angle annular dark field mode, this article describes mainly the imaging of an object by recording the two-dimensional spatial wave amplitude distribution in the image plane, similar to a "classic" light microscope. For disambiguation, the technique is also often referred to as phase contrast transmission electron microscopy, although this term is less appropriate. At present, the highest point resolution realised in high resolution transmission electron microscopy is around 0.5 ångströms (0.050 nm). At these small scales, individual atoms of a crystal and defects can be resolved. For 3-dimensional crystals, it is necessary to combine several views, taken from different angles, into a 3D map. This technique is called electron tomography.

The Strehl ratio is a measure of the quality of optical image formation, originally proposed by Karl Strehl, after whom the term is named. Used variously in situations where optical resolution is compromised due to lens aberrations or due to imaging through the turbulent atmosphere, the Strehl ratio has a value between 0 and 1, with a hypothetical, perfectly unaberrated optical system having a Strehl ratio of 1.

<span class="mw-page-title-main">Contrast transfer function</span>

The contrast transfer function (CTF) mathematically describes how aberrations in a transmission electron microscope (TEM) modify the image of a sample. The CTF sets the resolution of high-resolution transmission electron microscopy (HRTEM), also known as phase contrast TEM.

The pupil function or aperture function describes how a light wave is affected upon transmission through an optical imaging system such as a camera, microscope, or the human eye. More specifically, it is a complex function of the position in the pupil or aperture that indicates the relative change in amplitude and phase of the light wave. Sometimes this function is referred to as the generalized pupil function, in which case pupil function only indicates whether light is transmitted or not. Imperfections in the optics typically have a direct effect on the pupil function; it is therefore an important tool for studying optical imaging systems and their performance.

Super-resolution photoacoustic imaging is a set of techniques used to enhance spatial resolution in photoacoustic imaging. Specifically, these techniques primarily break the optical diffraction limit of the photoacoustic imaging system. This can be achieved by a variety of mechanisms, such as blind structured illumination, multi-speckle illumination, or photo-imprint photoacoustic microscopy.

Light field microscopy (LFM) is a scanning-free 3-dimensional (3D) microscopic imaging method based on the theory of the light field. This technique allows sub-second (~10 Hz) large volumetric imaging with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, which has never been achieved by other methods. Just as in traditional light field rendering, there are two steps for LFM imaging: light field capture and processing. In most setups, a microlens array is used to capture the light field. As for processing, it can be based on two kinds of representations of light propagation: the ray optics picture and the wave optics picture. The Stanford University Computer Graphics Laboratory published their first prototype LFM in 2006 and has been working on the cutting edge since then.

References

  1. Progress in Optics. Elsevier. 2008-01-25. p. 355. ISBN   978-0-08-055768-7.
  2. Ahi, Kiarash; Anwar, Mehdi (May 26, 2016). Anwar, Mehdi F; Crowe, Thomas W; Manzur, Tariq (eds.). "Developing terahertz imaging equation and enhancement of the resolution of terahertz images using deconvolution". Proc. SPIE. Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and Defense. 9856: 98560N. Bibcode:2016SPIE.9856E..0NA. doi:10.1117/12.2228680. S2CID 114994724.
  3. Ahi, Kiarash; Anwar, Mehdi (May 26, 2016). Anwar, Mehdi F; Crowe, Thomas W; Manzur, Tariq (eds.). "Modeling of terahertz images based on x-ray images: a novel approach for verification of terahertz images and identification of objects with fine details beyond terahertz resolution". Proc. SPIE. Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and Defense. 9856: 985610. Bibcode:2016SPIE.9856E..10A. doi:10.1117/12.2228685. S2CID 124315172.
  4. Ahi, Kiarash; Shahbazmohamadi, Sina; Asadizanjani, Navid (July 2017). "Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging". Optics and Lasers in Engineering. 104: 274–284. Bibcode:2018OptLE.104..274A. doi:10.1016/j.optlaseng.2017.07.007.
  5. Ahi, K. (November 2017). "Mathematical Modeling of THz Point Spread Function and Simulation of THz Imaging Systems". IEEE Transactions on Terahertz Science and Technology. 7 (6): 747–754. Bibcode:2017ITTST...7..747A. doi:10.1109/tthz.2017.2750690. ISSN   2156-342X. S2CID   11781848.
  6. Light transmitted through minute holes in a thin layer of silver, vacuum- or chemically deposited on a slide or cover-slip, has also been used, as such holes are bright and do not photo-bleach. S. Courty; C. Bouzigues; C. Luccardini; M-V Ehrensperger; S. Bonneau & M. Dahan (2006). "Tracking individual proteins in living cells using single quantum dot imaging". In James Inglese (ed.). Methods in Enzymology: Measuring Biological Responses with Automated Microscopy, Volume 414. Academic Press. pp. 223–224. ISBN 978-0-12-182819-6.
  7. P. J. Shaw & D. J. Rawlins (August 1991). "The point-spread function of a confocal microscope: its measurement and use in deconvolution of 3-D data". Journal of Microscopy. 163 (2): 151–165. doi:10.1111/j.1365-2818.1991.tb03168.x. S2CID   95121909.
  8. "POINT SPREAD FUNCTION (PSF)". www.telescope-optics.net. Retrieved 2017-12-30.
  9. The Natural Resolution
  10. Principles and Practice of Light Microscopy
  11. Corner Rounding and Line-end Shortening
  12. Roorda, Austin; Romero-Borja, Fernando; Donnelly III, William J.; Queener, Hope; Hebert, Thomas J.; Campbell, Melanie C. W. (2002-05-06). "Adaptive optics scanning laser ophthalmoscopy". Optics Express. 10 (9): 405–412. Bibcode:2002OExpr..10..405R. doi:10.1364/OE.10.000405. ISSN 1094-4087. PMID 19436374. S2CID 21971504.