Optical transfer function

Illustration of the optical transfer function (OTF) and its relation to image quality. The optical transfer function of a well-focused (a), and an out-of-focus optical imaging system without aberrations (d). As the optical transfer function of these systems is real and non-negative, the optical transfer function is by definition equal to the modulation transfer function (MTF). Images of a point source and a spoke target with high spatial frequency are shown in (b,e) and (c,f), respectively. Note that the scale of the point source images (b,e) is four times smaller than the spoke target images.

The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.


Either transfer function specifies the response to a periodic sine-wave pattern passing through the lens system, as a function of its spatial frequency or period, and its orientation. Formally, the OTF is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is complex-valued; but it will be real-valued in the common case of a PSF that is symmetric about its center. The MTF is formally defined as the magnitude (absolute value) of the complex OTF.
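This relationship can be sketched numerically with a discrete Fourier transform. The Gaussian PSF and grid size below are illustrative assumptions, not a model of any particular instrument:

```python
import numpy as np

# Hypothetical, symmetric Gaussian PSF sampled on a small grid.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
psf /= psf.sum()                 # normalize the total detected intensity

# OTF = Fourier transform of the PSF; MTF = its magnitude.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
mtf = np.abs(otf)

# A PSF symmetric about its center yields a (numerically) real OTF,
# and the normalization makes the OTF equal 1 at zero spatial frequency.
print(np.isclose(mtf[n // 2, n // 2], 1.0))   # True
print(np.max(np.abs(otf.imag)) < 1e-9)        # True
```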

The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited, imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. [1] Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the spoke image remains relatively sharp near the center of the spoke target, where the spoke density, and thus the spatial frequency, is highest.

Since the optical transfer function [2] (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number with absolute value and complex argument proportional to the relative contrast and translation of the projected pattern, respectively.

Various closely related characterizations of an optical system exhibiting coma, a typical aberration that occurs off-axis. (a) The point-spread function (PSF) is the image of a point source. (b) The image of a line is referred to as the line-spread function, in this case a vertical line. The line-spread function is directly proportional to the vertical integration of the point-spread image. The optical-transfer function (OTF) is defined as the Fourier transform of the point-spread function and is thus generally a two-dimensional complex function. Typically only a one-dimensional slice is shown (c), corresponding to the Fourier transform of the line-spread function. The thick green line indicates the real part of the function, and the thin red line the imaginary part. Often only the absolute value of the complex function is shown, this allows visualization of the two-dimensional function (d); however, more commonly only the one-dimensional function is shown (e). The latter is typically normalized at the spatial frequency zero and referred to as the modulation transfer function (MTF). For completeness, the complex argument is sometimes provided as the phase transfer function (PhTF), shown in panel (f).
Dimensions | Spatial function | Fourier transform
1D | Line-spread function (derivative of the edge-spread function) | 1D section of the 2D optical-transfer function
2D | Point-spread function | (2D) Optical transfer function
3D | 3D Point-spread function | 3D Optical-transfer function

Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease from 1 to 0 (at the diffraction limit) with increasing spatial frequency; however, the function is often not monotonic. When the translation of the pattern is also important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:

OTF(ν) = MTF(ν) e^(i PhTF(ν))

where

MTF(ν) = |OTF(ν)|
PhTF(ν) = arg(OTF(ν))

and arg(·) represents the complex argument function, while ν is the spatial frequency of the periodic pattern. In general, ν is a vector with a spatial frequency for each dimension, i.e. it also indicates the direction of the periodic pattern.
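A minimal numerical sketch of this decomposition; the single complex OTF sample below (contrast 0.5, quarter-period shift) is an arbitrary assumed value:

```python
import numpy as np

# Assumed OTF sample at one spatial frequency: contrast 0.5,
# pattern shifted by a quarter period (phase pi/2).
otf_value = 0.5 * np.exp(1j * np.pi / 2)

mtf = np.abs(otf_value)       # modulation transfer function: relative contrast
phtf = np.angle(otf_value)    # phase transfer function: pattern translation

# The two real-valued functions recombine into the complex OTF.
print(np.isclose(mtf * np.exp(1j * phtf), otf_value))   # True
```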

The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.

True to the definition of transfer function, OTF(0) should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence OTF(0) = 1.

Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. E.g. the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.

Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF). [3]

Examples

The OTF of an ideal lens system

A perfect lens system will provide a high-contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used at the visible wavelength of 500 nm would have the optical transfer function depicted in the right-hand figure.

The one-dimensional optical transfer function of a diffraction-limited imaging system is identical to its modulation transfer function.
Spoke target imaged by a diffraction-limited imaging system.
Transfer function and example image of an ideal, optical-aberration-free (diffraction-limited) imaging system.

It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter; in other words, the optical resolution of the image projection is 1/500th of a millimeter, or 2 micrometers. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between image-space and object-space units is typically a matter of a multiplication or division by the magnification. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200.

The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, and in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity.
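The required pixel pitch follows from a one-line calculation; the sketch below assumes the example's diffraction limit of 500 cycles per millimeter:

```python
# Nyquist-Shannon: at least two samples per period of the highest
# spatial frequency passed by the optics (assumed 500 cycles/mm here).
cutoff_cycles_per_mm = 500.0
period_mm = 1.0 / cutoff_cycles_per_mm   # 0.002 mm, i.e. a 2 micrometer period
max_pitch_mm = period_mm / 2.0           # 0.001 mm, i.e. 1 micrometer

print(max_pitch_mm * 1000.0)             # 1.0 (micrometers)
```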

OTF of an imperfect lens system

An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure.

The real part of the optical transfer function of an aberrated, imperfect imaging system.
The modulation transfer function of an aberrated, imperfect imaging system.
The image of a spoke target as imaged by an aberrated optical system.
Transfer function and example image of an f/4 optical imaging system at 500 nm with spherical aberration with standard Zernike coefficient of 0.25.

As with the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero at several spatial frequencies below 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa; this is referred to as contrast inversion. It is directly related to the sign reversal in the real part of the optical transfer function, and manifests itself as a shift by half a period for some periodic patterns.

While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs, 10 μm, or 100 LP/mm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.

The OTF of an optical system with a non-rotational symmetric aberration

When viewed through an optical system with trefoil aberration, the image of a point object will look as a three-pointed star (a). As the point-spread function is not rotational symmetric, only a two-dimensional optical transfer function can describe it well (b). The height of the surface plot indicates the absolute value and the hue indicates the complex argument of the function. A spoke target imaged by such an imaging device is shown by the simulation in (c).

Optical systems, and in particular optical aberrations, are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer functions or modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotationally-symmetric aberration.

Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotationally-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates phase. It can be seen that, while for the rotationally symmetric aberrations the phase is either 0 or π and thus the transfer function is real-valued, for the non-rotationally-symmetric aberration the transfer function has an imaginary component and the phase varies continuously.

Practical example – high-definition video system

While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.

Taking the example of a current high-definition (HD) video system with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2 = 960 line pairs per picture width, or 960 cycles per picture width. (Definitions in terms of cycles per unit angle or per mm are also possible, but generally less clear when dealing with cameras and more appropriate for telescopes etc.) In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard-definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high-definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed: the former has full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.

The three-dimensional optical transfer function

The three-dimensional point spread functions (a,c) and corresponding modulation transfer functions (b,d) of a wide-field microscope (a,b) and confocal microscope (c,d). In both cases the numerical aperture of the objective is 1.49 and the refractive index of the medium 1.52. The wavelength of the emitted light is assumed to be 600 nm and, in case of the confocal microscope, that of the excitation light 500 nm with circular polarization. A section is cut to visualize the internal intensity distribution. The colors as shown on the logarithmic color scale indicate the irradiance (a,c) and spectral density (b,d) normalized to the maximum value.

Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured. For example, a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three-dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions.

A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope. [4]

The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function.

Calculation

Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches: [5]

  1. as the Fourier transform of the incoherent point spread function, or
  2. as the auto-correlation of the pupil function of the optical system

Mathematically both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
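The equivalence can be checked numerically. In this sketch the grid size and pupil radius are arbitrary assumptions; the pupil's autocorrelation, sampled at a few shifts, is compared against the Fourier-transform route:

```python
import numpy as np

# Circular pupil on a padded grid (assumed size and radius; padding keeps
# the circular autocorrelation equal to the linear one for small shifts).
n = 128
radius = 20
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = ((x**2 + y**2) <= radius**2).astype(float)

# Route 1: incoherent PSF = |inverse FT of the pupil|^2, then OTF = FT(PSF).
amplitude = np.fft.ifft2(np.fft.ifftshift(pupil))
psf = np.abs(amplitude) ** 2
otf_fft = np.fft.fftshift(np.fft.fft2(psf)).real
otf_fft /= otf_fft[n // 2, n // 2]            # normalize so OTF(0) = 1

# Route 2: autocorrelation of the pupil, sampled at a few shifts via np.roll
# and normalized by the pupil area (the zero-shift overlap).
area = pupil.sum()
max_err = 0.0
for s in [0, 10, 25, 39]:
    auto = (pupil * np.roll(pupil, s, axis=1)).sum() / area
    max_err = max(max_err, abs(auto - otf_fft[n // 2, n // 2 + s]))

print(max_err < 1e-9)   # True: both routes agree
```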

Example

Ideal lens system with circular aperture

Auto-correlation of the pupil function

Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transform of the pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function. [5]

The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of 2ν, where ν is the spatial frequency normalized to the highest transmitted frequency. [2] In general the optical transfer function is normalized to a maximum value of one for ν = 0, so the resulting area should be divided by π, the area of the unit disk.

The intersecting area can be calculated as the sum of the areas of two identical circular segments, each of area θ/2 − sin(θ)/2, where θ is the circle-segment angle. By substituting |ν| = cos(θ/2), and using the equalities sin(θ)/2 = sin(θ/2) cos(θ/2) and cos⁻¹(|ν|) = θ/2, the area of each segment can be rewritten as cos⁻¹(|ν|) − |ν| √(1 − ν²). Dividing the total intersecting area, twice the segment area, by π hence gives the normalized optical transfer function:

OTF(ν) = (2/π) ( cos⁻¹(|ν|) − |ν| √(1 − ν²) )
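The closed-form result for the ideal circular-aperture system is easy to evaluate; the helper below is a direct transcription of the normalized formula:

```python
import numpy as np

def diffraction_limited_otf(nu):
    """Normalized OTF of an ideal imaging system with a circular aperture.

    nu: spatial frequency normalized to the cutoff frequency (|nu| <= 1).
    """
    nu = np.clip(np.abs(nu), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

print(np.isclose(diffraction_limited_otf(0.0), 1.0))   # True
print(diffraction_limited_otf(1.0))                    # 0.0
```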

A more detailed discussion can be found in [5] and [2]:152–153.

Numerical evaluation

The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. The resulting data are graphed against spatial frequency. In this case, a sixth-order polynomial is fitted to the MTF vs. spatial frequency curve to show the trend. The spatial frequency at which the MTF drops to 50% (the 50% cutoff frequency) is then determined. From this data, the approximate position of best focus of the unit under test can be determined.

The MTF data versus spatial frequency is normalized by fitting a sixth-order polynomial to it, making a smooth curve. The 50% cut-off frequency is determined and the corresponding spatial frequency is found, yielding the approximate position of best focus.

The Fourier transform of the line spread function (LSF) cannot be determined analytically from sampled measurement data by the continuous transform[ citation needed ]:

MTF(ν) = | ∫ LSF(x) e^(−i 2π ν x) dx |

Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform. [6]

MTF(ν_k) = | Σ_{n=0}^{N−1} LSF(x_n) e^(−i 2π k n / N) | ,   k = 0, …, N − 1

where

  • N = the number of samples of the line spread function
  • LSF(x_n) = the value of the LSF at the nth sample position

The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
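A minimal sketch of the numerical route (without the polynomial fit), assuming a synthetic Gaussian line-spread function in place of measured data:

```python
import numpy as np

# Synthetic LSF: a Gaussian blur profile sampled at an assumed 1 micrometer pitch.
dx = 0.001                                   # sample spacing in mm (assumed)
x = np.arange(-128, 128) * dx
lsf = np.exp(-x**2 / (2 * 0.002**2))         # assumed 2 micrometer blur width

# Discrete Fourier transform of the LSF; the MTF is its normalized magnitude.
spectrum = np.fft.rfft(lsf)
mtf = np.abs(spectrum) / np.abs(spectrum[0])
freqs = np.fft.rfftfreq(lsf.size, d=dx)      # spatial frequencies in cycles/mm

# The MTF is 1 at zero frequency and rolls off toward higher frequencies.
print(mtf[0])          # 1.0
print(mtf[-1] < 0.5)   # True
```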

The vectorial transfer function

At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves in three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined as shown in ( [7] ) and ( [8] ).

Measurement

The optical transfer function is not only useful for the design of optical systems, it is also valuable for characterizing manufactured systems.

Starting from the point spread function

The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. Such a point source can, for example, be a bright light behind a screen with a pinhole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.

Using extended test objects for spatially invariant optics

When the aberrations can be assumed to be spatially invariant, alternative patterns such as lines and edges can be used to determine the optical transfer function. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image divided by that of the extended object. Typically either a line or a black-white edge is used.
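The division of spectra can be sketched as follows. The random test object and Gaussian blur below are synthetic stand-ins for a real extended-target measurement:

```python
import numpy as np

# Synthetic broadband object and an assumed Gaussian blur kernel.
rng = np.random.default_rng(0)
n = 64
obj = rng.random((n, n))

y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
otf_true = np.fft.fft2(np.fft.ifftshift(psf))

# Simulated image: convolution in real space = multiplication of spectra.
img = np.fft.ifft2(np.fft.fft2(obj) * otf_true).real

# Estimate the OTF by dividing the image spectrum by the object spectrum,
# only where the object spectrum is safely non-zero.
obj_spec = np.fft.fft2(obj)
img_spec = np.fft.fft2(img)
mask = np.abs(obj_spec) > 1e-6 * np.abs(obj_spec).max()
otf_est = np.where(mask, img_spec / np.where(mask, obj_spec, 1.0), 0.0)

print(np.allclose(otf_est[mask], otf_true[mask]))   # True
```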

The line-spread function

The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.

The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section.

Edge-spread function

The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). [9] [10] However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement determines only a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric.

In evaluating the ESF, an operator defines a box area equivalent to 10% of the total frame area of a knife-edge test target back-illuminated by a black body. The area is defined to encompass the edge of the target image.

As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10%[ citation needed ] of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function.

ESF = (X − μ) / σ

where

  • ESF = the output array of normalized pixel intensity data
  • X = the input array of pixel intensity data
  • x_i = the ith element of X
  • μ = the average value of the pixel intensity data
  • σ = the standard deviation of the pixel intensity data
  • n = number of pixels used in average

with

μ = (1/n) Σ_{i=1}^{n} x_i   and   σ = √( (1/n) Σ_{i=1}^{n} (x_i − μ)² )

The line spread function is identical to the first derivative of the edge spread function, [11] which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows:

LSF(x) = d ESF(x) / dx

Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference:

LSF(x_i) ≈ (ESF(x_{i+1}) − ESF(x_{i−1})) / (x_{i+1} − x_{i−1})

where:

  • i = the index of the pixel
  • x_i = the position of the ith pixel
  • ESF(x_i) = the value of the ESF at the ith pixel
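A short sketch of this finite-difference step, assuming a synthetic error-function edge profile in place of measured ESF data:

```python
import numpy as np
from math import erf

# Assumed ESF: an error-function edge profile sampled at 101 positions (in mm).
x = np.linspace(-0.05, 0.05, 101)
esf = np.array([0.5 * (1.0 + erf(xi / 0.005)) for xi in x])

# Central finite difference:
# LSF(x_i) ~ (ESF(x_{i+1}) - ESF(x_{i-1})) / (x_{i+1} - x_{i-1})
lsf = (esf[2:] - esf[:-2]) / (x[2:] - x[:-2])

# The LSF peaks where the edge crosses its midpoint (x = 0 here).
peak_position = x[1:-1][np.argmax(lsf)]
print(abs(peak_position) < 1e-9)   # True
```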

Using a grid of black and white lines

Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square-wave pattern is used (simple black and white lines), not only is there more risk of aliasing, but account must be taken of the fact that the amplitude of the fundamental component of a square wave is greater than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square-wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square-wave result is sometimes referred to as the 'contrast transfer function' (CTF).
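The overshoot of the square wave's fundamental can be verified numerically; the sample count below is arbitrary, and the fundamental's amplitude approaches 4/π ≈ 1.27 times that of the square wave:

```python
import numpy as np

# Unit-amplitude square wave sampled over one period.
n = 4096
t = np.arange(n) / n
square = np.sign(np.sin(2 * np.pi * t))

# Amplitude of the fundamental from the DFT: 2*|X_1|/n.
spectrum = np.fft.rfft(square)
fundamental = 2.0 * np.abs(spectrum[1]) / n

# The fundamental exceeds the square wave's own amplitude of 1 by a factor 4/pi.
print(np.isclose(fundamental, 4.0 / np.pi, atol=1e-2))   # True
```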

Factors affecting MTF in typical camera systems

In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that do remain visible appear 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display.

Oversampling and downconversion to maintain the optical transfer function

The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and to 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s, when flying-spot scanners, and later CCD line scanners, were developed which sampled more pixels than were needed and then downconverted; this is why movies have always looked sharper on television than material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function, which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image-processing programs such as Photoshop.
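A one-dimensional sketch of this idea (NumPy assumed): a truncated, windowed sin(x)/x kernel is one of the practical approximations to the ideal filter mentioned above; the Hann window used here is an illustrative choice, not the specific approximation any particular product uses.

```python
import numpy as np

def windowed_sinc_decimate(signal, factor, taps=8):
    """Low-pass filter with a windowed sin(x)/x kernel, then keep every
    `factor`-th sample. A practical approximation of the ideal 'brick
    wall' filter, whose exact kernel would be infinitely long."""
    t = np.arange(-taps * factor, taps * factor + 1)
    # sinc(t/factor) cuts off at the new, lower Nyquist rate; the Hann
    # window tames the ripple caused by truncating the kernel.
    kernel = np.sinc(t / factor) * np.hanning(len(t))
    kernel /= kernel.sum()  # unity gain at DC
    return np.convolve(signal, kernel, mode="same")[::factor]

# A tone below the new Nyquist rate survives 4:1 downconversion almost
# unchanged; a tone above it is strongly attenuated instead of aliasing.
n, factor = 1024, 4
x = np.arange(n)
low = np.sin(2 * np.pi * 0.02 * x)   # 0.02 cycles/sample < new Nyquist (0.125)
high = np.sin(2 * np.pi * 0.2 * x)   # 0.2 cycles/sample > new Nyquist
low_out = windowed_sinc_decimate(low, factor)[16:-16]    # trim edge effects
high_out = windowed_sinc_decimate(high, factor)[16:-16]
print(np.std(low_out), np.std(high_out))  # ~0.7 versus nearly 0
```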

Just as standard-definition video with a high-contrast MTF is only possible with oversampling, HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digital filtering. With movies now being shot in 4K and even 8K for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in the absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained by down-conversion from an equal-quality 10-megapixel still camera. Because of the problem of maintaining a high-contrast MTF, broadcasters like the BBC long considered retaining standard-definition television, but improving its quality by shooting and viewing with many more pixels (though, as previously mentioned, such a system, though impressive, ultimately lacks the very fine detail which, though attenuated, enhances the effect of true HD viewing).

Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.

Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image-sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First-generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f/4 begin to limit resolution. Even professional video cameras mostly use 2/3-inch sensors, prohibiting the use of apertures around f/16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range. [12]
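The diffraction trade-off can be made concrete: for incoherent light, the MTF of an aberration-free lens falls to zero at the cut-off spatial frequency 1/(λN), where λ is the wavelength and N the f-number, while a sensor's Nyquist frequency is 1/(2 × pixel pitch). A rough sketch (the pixel pitches below are illustrative assumptions, not measured values):

```python
# Diffraction cut-off of an ideal (aberration-free) lens in incoherent
# light: the MTF reaches zero at spatial frequency 1 / (wavelength * N).
wavelength_mm = 550e-6  # green light, 550 nm, expressed in mm

def cutoff_lp_per_mm(f_number):
    return 1.0 / (wavelength_mm * f_number)

# Illustrative pixel pitches: ~2.2 um for a small 1/4-inch sensor,
# ~8.4 um for a modest-resolution 35 mm full-frame sensor.
sensors = [("1/4-inch sensor", 2.2e-3), ("full-frame sensor", 8.4e-3)]
for label, pitch_mm in sensors:
    nyquist = 1.0 / (2.0 * pitch_mm)  # line pairs per mm
    for n in (4, 8, 16):
        print(f"{label} (Nyquist {nyquist:.0f} lp/mm), "
              f"f/{n}: diffraction cut-off {cutoff_lp_per_mm(n):.0f} lp/mm")
```

With a 2.2 µm pitch the f/8 cut-off (≈227 lp/mm) already coincides with the sensor's Nyquist frequency, and since contrast drops well before the cut-off is reached, such a sensor loses sharpness from around f/4 onward, consistent with the figures above.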

Trend to large-format DSLRs and improved MTF potential

There has recently been a shift towards large-format digital single-lens reflex cameras, driven by the need for low-light sensitivity and narrow depth-of-field effects. This has led to such cameras being preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading video production towards large-format downconversion, with digital filtering becoming the standard approach to achieving a flat MTF with true freedom from aliasing.

Digital inversion of the OTF

Due to optical effects, the contrast may be sub-optimal and approach zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to, for example, aliasing. It is therefore only effective on good-quality recordings with a sufficiently high signal-to-noise ratio.
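A minimal one-dimensional sketch of Wiener deconvolution (NumPy assumed): the recorded spectrum is multiplied by conj(H)/(|H|² + NSR), where H is the OTF and the constant NSR stands in for the noise-to-signal power ratio, so that frequencies where the OTF is nearly zero are suppressed rather than amplified without bound.

```python
import numpy as np

def wiener_deconvolve(recorded, psf, nsr=1e-4):
    """Frequency-domain Wiener filter: F_hat = G * conj(H) / (|H|^2 + NSR),
    where H is the OTF (the Fourier transform of the PSF) and the constant
    NSR approximates the noise-to-signal power ratio."""
    H = np.fft.fft(psf, n=len(recorded))
    G = np.fft.fft(recorded)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(F_hat))

# Blur a bar pattern with a (circularly centred) Gaussian PSF, then restore.
n = 256
x = np.arange(n)
pattern = (np.sin(2 * np.pi * 8 * x / n) > 0).astype(float)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 3.0) ** 2)
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(pattern) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)

# Restoration shrinks the error relative to the original pattern.
print(np.abs(blurred - pattern).mean(), np.abs(restored - pattern).mean())
```

With noisy data a larger NSR must be chosen, trading residual blur against amplified noise.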

Limitations

In general, the point spread function (the image of a point source) also depends on factors such as the wavelength (color) and field angle (lateral point-source position). When such variation is sufficiently gradual, the optical system can be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly upon lateral translation, the optical transfer function no longer describes the optical system accurately.


References

  1. The exact definition of resolution may vary and is often taken to be 1.22 times larger, as defined by the Rayleigh criterion.
  2. Williams, Charles S. (2002). Introduction to the Optical Transfer Function. SPIE – The International Society for Optical Engineering. ISBN 0-8194-4336-0.
  3. "Contrast Transfer Function". Retrieved 16 November 2013.
  4. Macias-Garza, F.; Bovik, A.; Diller, K.; Aggarwal, S.; Aggarwal, J. (1988). "The missing cone problem and low-pass distortion in optical serial sectioning microscopy". ICASSP-88, International Conference on Acoustics, Speech, and Signal Processing. Vol. 2. pp. 890–893. doi:10.1109/ICASSP.1988.196731. S2CID 120191405.
  5. Goodman, Joseph (2005). Introduction to Fourier Optics (3rd ed.). Roberts & Co Publishers. ISBN 0-9747077-2-4.
  6. Chapra, S.C.; Canale, R.P. (2006). Numerical Methods for Engineers (5th ed.). New York: McGraw-Hill.
  7. Sheppard, C.J.R.; Larkin, K. (1997). "Vectorial pupil functions and vectorial transfer functions" (PDF). Optik-Stuttgart. 107: 79–87.
  8. Arnison, M. R.; Sheppard, C. J. R. (2002). "A 3D vectorial optical transfer function suitable for arbitrary pupil functions". Optics Communications. 211 (1–6): 53–63. Bibcode:2002OptCo.211...53A. doi:10.1016/S0030-4018(02)01857-6.
  9. Holst, G.C. (1998). Testing and Evaluation of Infrared Imaging Systems (2nd ed.). Florida: JCD Publishing; Washington: SPIE.
  10. "Test and Measurement – Products – EOI". www.Electro-Optical.com. Archived from the original on 28 August 2008. Retrieved 2 January 2018.
  11. Mazzetta, J.A.; Scopatz, S.D. (2007). "Automated Testing of Ultraviolet, Visible, and Infrared Sensors Using Shared Optics". Infrared Imaging Systems: Design Analysis, Modeling, and Testing XVIII. Vol. 6543, pp. 654313-1–654313-14.
  12. "B2BVideoSource.com: Camera Terminology". www.B2BVideoSource.com. Retrieved 2 January 2018.