Speckle imaging comprises a range of high-resolution astronomical imaging techniques based on the analysis of large numbers of short exposures that freeze the variation of atmospheric turbulence. They can be divided into the shift-and-add ("image stacking") method and the speckle interferometry methods. These techniques can dramatically increase the resolution of ground-based telescopes, but are limited to bright targets.
The principle of all the techniques is to take very short exposure images of astronomical targets, and then process those so as to remove the effects of astronomical seeing. Use of these techniques led to a number of discoveries, including thousands of binary stars that would otherwise appear as a single star to a visual observer working with a similar-sized telescope, and the first images of sunspot-like phenomena on other stars. Many of the techniques remain in wide use today, notably when imaging relatively bright targets.
The resolution of a telescope is limited by the size of the main mirror, due to the effects of Fraunhofer diffraction. This results in images of distant objects being spread out to a small spot known as the Airy disk. A group of objects whose images are closer together than this limit appear as a single object. Thus larger telescopes can image not only dimmer objects (because they collect more light), but resolve objects that are closer together as well.
This improvement of resolution breaks down due to the practical limits imposed by the atmosphere, whose random nature disrupts the single spot of the Airy disk into a pattern of similarly-sized spots scattered over a much larger area (see the adjacent image of a binary). For typical seeing, the practical resolution limits are at mirror sizes much less than the mechanical limits for the size of mirrors, namely at a mirror diameter equal to the astronomical seeing parameter r0 – about 20 cm in diameter for observations with visible light under good conditions. For many years, telescope performance was limited by this effect, until the introduction of speckle interferometry and adaptive optics provided a means of removing this limitation.
Speckle imaging recreates the original image through image processing techniques. The key to the technique, found by the American astronomer David L. Fried in 1966, was to take images with exposures so short that the atmosphere is effectively "frozen" in place. [1] At infrared wavelengths, coherence times τ0 are on the order of 100 ms, but for the visible region they drop to as little as 10 ms. When exposure times are shorter than τ0, the movement of the atmosphere is too sluggish to have an effect; the speckles recorded in the image are a snapshot of the atmospheric seeing at that instant. The coherence time τ0 = r0/v depends on wavelength, because the Fried parameter r0 is itself a function of wavelength.
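The relation τ0 = r0/v can be illustrated numerically. The sketch below is a worked example, not part of the original text: the wind speed (10 m/s) and the r0 ∝ λ^(6/5) wavelength scaling are assumed inputs, with r0 = 20 cm at 500 nm taken from the figure quoted above for good conditions.

```python
# Hedged illustration of the coherence-time relation tau0 = r0 / v.
# Assumed: r0 = 0.20 m at 500 nm, wind speed v = 10 m/s, r0 ~ lambda^(6/5).

def fried_parameter(r0_ref, lam, lam_ref=500e-9):
    """Scale the Fried parameter with wavelength: r0 is proportional to lambda^(6/5)."""
    return r0_ref * (lam / lam_ref) ** (6.0 / 5.0)

def coherence_time(r0, wind_speed):
    """Atmospheric coherence time tau0 = r0 / v, in seconds."""
    return r0 / wind_speed

r0_visible = 0.20   # metres at 500 nm (good seeing, per the text)
v = 10.0            # m/s, an assumed wind speed

tau_visible = coherence_time(r0_visible, v)                        # ~20 ms
tau_infrared = coherence_time(fried_parameter(0.20, 2.2e-6), v)    # K band (2.2 um)

print(f"tau0 at 500 nm: {tau_visible * 1e3:.0f} ms")
print(f"tau0 at 2.2 um: {tau_infrared * 1e3:.0f} ms")
```

With these assumed values the visible-light coherence time comes out at a few tens of milliseconds while the infrared value exceeds 100 ms, consistent with the orders of magnitude quoted above.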
The downside of the technique is that taking images at this short an exposure is difficult, and if the object is too dim, not enough light will be captured to make analysis possible. Early uses of the technique in the early 1970s were made on a limited scale using photographic techniques, but since photographic film captures only about 7% of the incoming light, only the brightest of objects could be viewed in this way. The introduction of the CCD into astronomy, which captures more than 70% of the light, lowered the bar on practical applications by an order of magnitude, and today the technique is widely used on bright astronomical objects (e.g. stars and star systems).
Many of the simpler speckle imaging methods have multiple names, largely from amateur astronomers re-inventing existing speckle imaging techniques and giving them new names.
Another use of the technique is in industry. By shining a laser (whose smooth wavefront is an excellent simulation of the light from a distant star) on a surface, the resulting speckle pattern can be processed to give detailed images of flaws in the material. [2]
The shift-and-add method (more recently "image-stacking" method) is a form of speckle imaging commonly used for obtaining high quality images from a number of short exposures with varying image shifts. [5] [6] It has been used in astronomy for several decades, and is the basis for the image stabilisation feature on some cameras. The short exposure images are aligned by using the brightest speckle and averaged to give a single output image. [7]
The method involves calculating the differential shifts of the images. This is easily accomplished in astronomical images, since they can be aligned with the stars. Once the images are aligned, they are averaged together. It is a basic principle of statistics that variation in a sample can be reduced by averaging the individual values; averaging N images should increase the signal-to-noise ratio by a factor of √N. A number of software packages exist for performing this, including IRAF, RegiStax, Autostakkert, Keith's Image Stacker, Hugin, and Iris.
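The align-and-average step can be sketched in a few lines of NumPy. This is a minimal illustration, not any of the packages named above: frame sizes, noise levels, and the use of the brightest pixel as the alignment reference are all assumptions for the demo.

```python
# A minimal shift-and-add sketch. Each frame is re-centred on its brightest
# speckle, then the stack is averaged; averaging N frames improves the
# signal-to-noise ratio by roughly sqrt(N).
import numpy as np

def shift_and_add(frames):
    """Align each frame on its brightest pixel and average the stack."""
    stacked = np.zeros_like(frames[0], dtype=float)
    centre = np.array(frames[0].shape) // 2
    for frame in frames:
        peak = np.unravel_index(np.argmax(frame), frame.shape)
        shift = centre - np.array(peak)
        # np.roll wraps at the edges; fine for a sketch, real pipelines pad instead
        stacked += np.roll(frame, shift, axis=(0, 1))
    return stacked / len(frames)

# Synthetic demo: a point source jittering around a 64x64 noisy frame,
# mimicking atmospheric image motion.
rng = np.random.default_rng(0)
frames = []
for _ in range(100):
    img = rng.normal(0.0, 0.1, (64, 64))
    y, x = rng.integers(24, 40, size=2)   # random per-frame image shift
    img[y, x] += 5.0                       # the bright speckle
    frames.append(img)

result = shift_and_add(frames)
print(result[32, 32])   # the source is re-centred at (32, 32)
```

In the averaged output the source sits at the frame centre while the background noise is suppressed by roughly the square root of the number of frames.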
In the lucky imaging approach, only the best short exposures are selected for averaging. Early shift-and-add techniques aligned images according to the image centroid, giving a lower overall Strehl ratio.
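The frame-selection step of lucky imaging can be added in front of any stacking routine. The sketch below is illustrative only: it ranks frames by their peak pixel value, a crude stand-in for the Strehl ratio, and the 10% keep fraction is an assumed parameter.

```python
# Lucky-imaging selection, sketched: keep only the sharpest short exposures
# (ranked here by peak brightness, a crude Strehl-ratio proxy) before
# aligning and averaging. The keep fraction is an assumed tuning parameter.
import numpy as np

def select_lucky(frames, keep_fraction=0.1):
    """Return the best `keep_fraction` of frames, ranked by peak pixel value."""
    scores = [frame.max() for frame in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return [frames[i] for i in best]

# Demo with synthetic frames of increasing peak brightness (0..19)
frames = [np.full((8, 8), float(i)) for i in range(20)]
best = select_lucky(frames)
print(len(best))   # 2 frames kept: the two with the brightest peaks
```

The selected frames would then be passed to a shift-and-add stage; discarding the blurriest exposures trades total signal for sharpness.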
In 1970, the French astronomer Antoine Labeyrie showed that Fourier analysis (speckle interferometry) can obtain information about the high-resolution structure of the object from the statistical properties of the speckle patterns. [8] This technique was first implemented in 1971 at Palomar Observatory (200-inch telescope) by Daniel Y. Gezari, Antoine Labeyrie and Robert V. Stachnick. [9] Methods developed in the 1980s allowed simple images to be reconstructed from this power spectrum information.
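The core of Labeyrie's insight is that averaging the power spectra (squared Fourier amplitudes) of many short exposures preserves high-resolution object information that direct image averaging destroys. The sketch below demonstrates this with a synthetic binary star; the frame size, source separation, and per-frame shifts are all assumed demo values.

```python
# Sketch of Labeyrie-style speckle interferometry: average |FFT(frame)|^2
# over many short exposures. A random image shift changes the Fourier phase
# of each frame but not its power spectrum, so fine structure survives.
import numpy as np

def mean_power_spectrum(frames):
    """Average the squared Fourier amplitudes over all short exposures."""
    ps = np.zeros(frames[0].shape)
    for frame in frames:
        ps += np.abs(np.fft.fft2(frame)) ** 2
    return ps / len(frames)

# Demo: a binary star (two point sources 8 px apart) with a random
# per-frame position; the position cancels in the power spectrum.
rng = np.random.default_rng(1)
frames = []
for _ in range(50):
    img = np.zeros((64, 64))
    y, x = rng.integers(0, 48, size=2)
    img[y, x] = 1.0
    img[y + 8, x] = 1.0   # companion 8 px away
    frames.append(img)

ps = mean_power_spectrum(frames)
print(ps[0, 0])   # total flux squared, unaffected by the shifts
print(ps[4, 0])   # fringe null whose position encodes the 8 px separation
```

The averaged power spectrum shows the fringe pattern of the binary (with a null at the spatial frequency set by the separation), even though no single frame was aligned; in practice a reference star is observed to calibrate out the atmospheric transfer function.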
One more recent type of speckle interferometry, called speckle masking, involves calculation of the bispectrum or closure phases from each of the short exposures. [10] The "average bispectrum" can then be calculated and inverted to obtain an image. This works particularly well using aperture masks. In this arrangement the telescope aperture is blocked except for a few holes which allow light through, creating a small optical interferometer with better resolving power than the telescope would otherwise have. This aperture masking technique was pioneered by the Cavendish Astrophysics Group. [11] [12]
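The key property exploited by speckle masking is that the bispectrum B(u, v) = F(u) F(v) F*(u + v) is immune to the linear Fourier phase introduced by an image shift, so phase information about the object survives averaging. The 1-D sketch below is a simplified illustration (real pipelines work on 2-D frames over many frequency pairs); the signal and frequencies are arbitrary demo choices.

```python
# A simplified 1-D sketch of the bispectrum used in speckle masking:
# B(u, v) = F(u) * F(v) * conj(F(u + v)).
# A shift of the signal multiplies F(k) by a linear phase exp(-2*pi*i*k*s/N);
# those phases cancel exactly in B, so the average bispectrum keeps object phase.
import numpy as np

def bispectrum(signal, u, v):
    """One bispectrum element of a 1-D signal at frequency indices u and v."""
    f = np.fft.fft(signal)
    return f[u] * f[v] * np.conj(f[u + v])

rng = np.random.default_rng(2)
x = rng.normal(size=128)
b_original = bispectrum(x, 3, 7)
b_shifted = bispectrum(np.roll(x, 5), 3, 7)   # an image shift leaves B unchanged
print(np.allclose(b_original, b_shifted))     # True
```

Averaging such bispectrum elements over many short exposures, then inverting for the object phases, is the reconstruction step the text describes.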
One limitation of the technique is that it requires extensive computer processing of the image, which was hard to come by when the technique was first developed. This limitation has faded away over the years as computing power has increased, and nowadays desktop computers have more than enough power to make such processing a trivial task.
Speckle imaging in biology refers to the underlabeling of periodic cellular components (such as filaments and fibers) so that, instead of appearing as a continuous and uniform structure, the component appears as a discrete set of speckles. This is due to the statistical distribution of the labeled component within the unlabeled components. The technique, also known as dynamic speckle, enables real-time monitoring of dynamical systems and video image analysis to understand biological processes.
(Caption for an accompanying image gallery.) All of these images were obtained using infrared AO or IR interferometry (not speckle imaging) and have higher resolution than can be obtained with, for example, the Hubble Space Telescope. Speckle imaging can produce images with four times better resolution than these.
Astrophotography, also known as astronomical imaging, is the photography or imaging of astronomical objects, celestial events, or areas of the night sky. The first photograph of an astronomical object was taken in 1840, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, modern astrophotography has the ability to image objects outside of the visible spectrum of the human eye such as dim stars, nebulae, and galaxies. This is accomplished through long time exposure as both film and digital cameras can accumulate and sum photons over long periods of time or using specialized optical filters which limit the photons to a certain wavelength.
Interferometry is a technique which uses the interference of superimposed waves to extract information. Interferometry typically uses electromagnetic waves and is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, oceanography, seismology, spectroscopy, quantum mechanics, nuclear and particle physics, plasma physics, biomolecular interactions, surface profiling, microfluidics, mechanical stress/strain measurement, velocimetry, optometry, and making holograms.
Adaptive optics (AO) is a technique of precisely deforming a mirror in order to compensate for light distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, optical fabrication and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors such as a deformable mirror or a liquid crystal array.
Angular resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution. It is used in optics applied to light waves, in antenna theory applied to radio waves, and in acoustics applied to sound waves. The colloquial use of the term "resolution" sometimes causes confusion; when an optical system is said to have a high resolution or high angular resolution, it means that the perceived distance, or actual angular distance, between resolved neighboring objects is small. The value that quantifies this property, θ, which is given by the Rayleigh criterion, is low for a system with a high resolution. The closely related term spatial resolution refers to the precision of a measurement with respect to space, which is directly connected to angular resolution in imaging instruments. The Rayleigh criterion shows that the minimum angular spread that can be resolved by an image-forming system is limited by diffraction to the ratio of the wavelength of the waves to the aperture width. For this reason, high-resolution imaging systems such as astronomical telescopes, long distance telephoto camera lenses and radio telescopes have large apertures.
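The Rayleigh criterion quoted above can be made concrete with a short worked example. The values below (550 nm visible light, apertures of 20 cm and 4 m) are assumed for illustration, not taken from the text.

```python
# Worked example of the Rayleigh criterion: theta = 1.22 * lambda / D (radians).
# Assumed inputs: 550 nm light, apertures of 0.2 m and 4 m.

def rayleigh_limit(wavelength, aperture):
    """Diffraction-limited angular resolution in radians."""
    return 1.22 * wavelength / aperture

RAD_TO_ARCSEC = 180.0 / 3.141592653589793 * 3600.0

lam = 550e-9                    # metres
for d in (0.2, 4.0):            # 20 cm vs 4 m aperture
    theta = rayleigh_limit(lam, d) * RAD_TO_ARCSEC
    print(f"D = {d:>4} m -> {theta:.3f} arcsec")
```

The larger aperture yields a proportionally smaller θ, which is exactly why the high-resolution systems named above use large apertures.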
An optical telescope is a telescope that gathers and focuses light mainly from the visible part of the electromagnetic spectrum, to create a magnified image for direct visual inspection, to make a photograph, or to collect data through electronic image sensors.
In astronomy, seeing is the degradation of the image of an astronomical object due to turbulence in the atmosphere of Earth that may become visible as blurring, twinkling or variable distortion. The origin of this effect is rapidly changing variations of the optical refractive index along the light path from the object to the detector. Seeing is a major limitation to the angular resolution in astronomical observations with telescopes that would otherwise be limited through diffraction by the size of the telescope aperture. Today, many large scientific ground-based optical telescopes include adaptive optics to overcome seeing.
In optics, any optical instrument or system – a microscope, telescope, or camera – has a principal limit to its resolution due to the physics of diffraction. An optical instrument is said to be diffraction-limited if it has reached this limit of resolution performance. Other factors may affect an optical system's performance, such as lens imperfections or aberrations, but these are caused by errors in the manufacture or calculation of a lens, whereas the diffraction limit is the maximum resolution possible for a theoretically perfect, or ideal, optical system.
Observational astronomy is a division of astronomy that is concerned with recording data about the observable universe, in contrast with theoretical astronomy, which is mainly concerned with calculating the measurable implications of physical models. It is the practice and study of observing celestial objects with the use of telescopes and other astronomical instruments.
Lucky imaging is one form of speckle imaging used for astrophotography. Speckle imaging techniques use a high-speed camera with exposure times short enough so that the changes in the Earth's atmosphere during the exposure are minimal.
The point spread function (PSF) describes the response of a focused optical imaging system to a point source or point object. A more general term for the PSF is the system's impulse response: the PSF is the impulse response function (IRF) of a focused optical imaging system. The PSF in many contexts can be thought of as the extended blob in an image that represents a single point object, considered as a spatial impulse. In functional terms, it is the spatial domain version of the optical transfer function (OTF) of an imaging system. It is a useful concept in Fourier optics, astronomical imaging, medical imaging, electron microscopy and other imaging techniques such as 3D microscopy and fluorescence microscopy.
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection. At each separation and orientation, the lobe-pattern of the interferometer produces an output which is one component of the Fourier transform of the spatial distribution of the brightness of the observed object. The image of the source is produced from these measurements. Astronomical interferometers are commonly used for high-resolution optical, infrared, submillimetre and radio astronomy observations. For example, the Event Horizon Telescope project derived the first image of a black hole using aperture synthesis.
Aperture masking interferometry is a form of speckle interferometry that allows diffraction-limited imaging from ground-based telescopes, and is a high-contrast imaging mode on the James Webb Space Telescope. This technique allows ground-based telescopes to reach the maximum possible resolution, allowing ground-based telescopes with large diameters to produce far greater resolution than the Hubble Space Telescope. A mask is placed over the telescope which only allows light through a small number of holes. This array of holes acts as a miniature astronomical interferometer. The principal limitation of the technique is that it is applicable only to relatively bright astronomical objects, since the mask discards most of the light received from the astronomical source. The method was developed by John E. Baldwin and collaborators in the Cavendish Astrophysics Group at the University of Cambridge in the late 1980s.
The BTA-6 is a 6-metre (20 ft) aperture optical telescope at the Special Astrophysical Observatory located in the Zelenchuksky District of Karachay-Cherkessia on the north side of the Caucasus Mountains in southern Russia.
Antoine Émile Henry Labeyrie is a French astronomer who held the Observational astrophysics chair at the Collège de France between 1991 and 2014, where he is currently professor emeritus. He is president of the Hypertelescope Lise association, which aims to develop an extremely large astronomical interferometer with spherical geometry that might theoretically show features on Earth-like worlds around other suns. He is a member of the French Academy of Sciences in the Sciences of the Universe section. Between 1995 and 1999 he was director of the Haute-Provence Observatory.
An astronomical interferometer or telescope array is a set of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation, called baseline, between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array.
Holographic interferometry (HI) is a technique which enables the measurements of static and dynamic displacements of objects with optically rough surfaces at optical interferometric precision. These measurements can be applied to stress, strain and vibration analysis, as well as to non-destructive testing and radiation dosimetry. It can also be used to detect optical path length variations in transparent media, which enables, for example, fluid flow to be visualised and analyzed. It can also be used to generate contours representing the form of the surface.
A point diffraction interferometer (PDI) is a type of common-path interferometer. Unlike an amplitude-splitting interferometer, such as a Michelson interferometer, which separates out an unaberrated beam and interferes this with the test beam, a common-path interferometer generates its own reference beam. In PDI systems, the test and reference beams travel the same or almost the same path. This design makes the PDI extremely useful when environmental isolation is not possible or a reduction in the number of precision optics is required. The reference beam is created from a portion of the test beam by diffraction from a small pinhole in a semitransparent coating. The principle of a PDI is shown in Figure 1.
In optical astronomy, interferometry is used to combine signals from two or more telescopes to obtain measurements with higher resolution than could be obtained with the telescopes individually. This technique is the basis for astronomical interferometer arrays, which can make measurements of very small astronomical objects if the telescopes are spread out over a wide area. If a large number of telescopes are used, a picture can be produced with resolution similar to that of a single telescope with the diameter of the combined spread of telescopes. These include radio telescope arrays such as the VLA, VLBI networks and the SMA, as well as astronomical optical interferometer arrays such as COAST, NPOI and IOTA, which have produced the highest resolution optical images ever achieved in astronomy. The VLT Interferometer is expected to produce its first images using aperture synthesis soon, followed by other interferometers such as the CHARA array and the Magdalena Ridge Observatory Interferometer, which may consist of up to 10 optical telescopes. If outrigger telescopes are built at the Keck Interferometer, it will also become capable of interferometric imaging.
The Fried parameter or Fried's coherence length r0 is a measure of the quality of optical transmission through the atmosphere due to random inhomogeneities in the atmosphere's refractive index. In practice, such inhomogeneities are primarily due to tiny variations in temperature on smaller spatial scales resulting from random turbulent mixing of larger temperature variations on larger spatial scales, as first described by Kolmogorov. The Fried parameter has units of length and is typically expressed in centimeters. It is defined as the diameter of a circular area over which the rms wavefront aberration due to passage through the atmosphere is equal to 1 radian, and typical values relevant to astronomy are in the tens of centimeters depending on atmospheric conditions. For a telescope with an aperture of diameter D, the smallest spot that can be observed is given by the telescope's point spread function (PSF). Atmospheric turbulence increases the diameter of the smallest spot by a factor of approximately D/r0. As such, imaging from telescopes with apertures much smaller than r0 is less affected by atmospheric seeing than by diffraction due to the telescope's small aperture. However, the imaging resolution of telescopes with apertures much larger than r0 will be limited by the turbulent atmosphere, preventing the instruments from approaching the diffraction limit.
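The two regimes described above (diffraction-limited for apertures below r0, seeing-limited above it) can be sketched with a simple approximation. The function below and its input values (500 nm, r0 = 20 cm) are illustrative assumptions, using the rough rule that the uncorrected resolution scales as λ divided by the smaller of D and r0.

```python
# Sketch of how the Fried parameter r0 caps uncorrected resolution:
# for D << r0 the telescope is diffraction-limited (theta ~ lambda / D);
# for D >> r0 it is seeing-limited (theta ~ lambda / r0).
# Values below are assumed for illustration.

def resolution(wavelength, aperture, r0):
    """Approximate achievable angular resolution (radians), no AO correction."""
    return wavelength / min(aperture, r0)

lam, r0 = 500e-9, 0.2                 # 500 nm, r0 = 20 cm (good seeing)
theta_10cm = resolution(lam, 0.1, r0)  # 10 cm telescope: diffraction-limited
theta_6m = resolution(lam, 6.0, r0)    # 6 m telescope: seeing-limited

print(theta_6m == resolution(lam, 0.2, r0))   # True: no better than 20 cm aperture
```

This is exactly the limitation that speckle imaging and adaptive optics are designed to overcome: without correction, a 6 m mirror resolves no finer than a 20 cm one under these conditions.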
Kernel phases are observable quantities used in high-resolution astronomical imaging for superresolution image creation. They can be seen as a generalization of closure phases to redundant arrays. For this reason, when the wavefront quality requirements are met, they offer an alternative to aperture masking interferometry that can be executed without a mask while retaining the phase-error rejection properties. The observables are computed through linear algebra from the Fourier transform of direct images. They can then be used for statistical testing, model fitting, or image reconstruction.