Phase-contrast imaging

Phase-contrast imaging is a family of imaging methods that exploit differences in the refractive index of different materials to differentiate between the structures under analysis. In conventional light microscopy, phase contrast can be employed to distinguish between structures of similar transparency, and to examine crystals on the basis of their double refraction. This has uses in biological, medical and geological science. In X-ray tomography, the same physical principles can be used to increase image contrast by highlighting small details of differing refractive index within structures that are otherwise uniform. In transmission electron microscopy (TEM), phase contrast enables very high resolution (HR) imaging, making it possible to distinguish features a few ångströms apart (the current record resolution is about 40 pm [1]).

Atomic physics

Phase-contrast imaging is commonly used in atomic physics to describe a range of techniques for dispersively imaging ultracold atoms. Dispersion is a phenomenon in the propagation of electromagnetic fields (light) in matter. In general, the refractive index of a material, which alters the phase velocity and refraction of the field, depends on the wavelength or frequency of the light. This is what gives rise to the familiar behavior of prisms, which split light into its constituent wavelengths. Microscopically, we may think of this behavior as arising from the interaction of the electromagnetic wave with the atomic dipoles. The oscillating field drives the dipoles, which in turn reradiate light with the same polarization and frequency, albeit delayed or phase-shifted with respect to the incident wave. These waves interfere to produce the altered wave that propagates through the medium. If the light is monochromatic (that is, an electromagnetic wave of a single frequency or wavelength) with a frequency close to an atomic transition, the atoms will also absorb photons from the light field, reducing the amplitude of the incident wave. Mathematically, these two interaction mechanisms (dispersive and absorptive) are commonly written as the real and imaginary parts, respectively, of a complex refractive index.

Dispersive imaging refers strictly to the measurement of the real part of the refractive index. In phase-contrast imaging, a monochromatic probe field is detuned far away from any atomic transitions to minimize absorption and shone onto an atomic medium (such as a Bose-condensed gas). Since absorption is minimized, the only effect of the gas on the light is to alter the phase of various points along its wavefront. If we write the incident electromagnetic field as

$$E(z,t) = E_0 e^{i(kz - \omega t)},$$

then the effect of the medium is to phase shift the wave by some amount $\phi(x,y)$, which is in general a function of the position $(x,y)$ in the plane of the object (unless the object is of homogeneous density, i.e. of constant index of refraction), where we assume the phase shift to be small, such that we can neglect refractive effects:

$$E(z,t) = E_0 e^{i(kz - \omega t + \phi(x,y))}.$$
We may think of this wave as a superposition of smaller bundles of waves, each with a corresponding phase shift $\phi(x,y)$:

$$E = C \int_A e^{i(kz - \omega t + \phi(x,y))}\,dA,$$

where $C$ is a normalization constant and the integral is over the area $A$ of the object plane. Since $\phi(x,y)$ is assumed to be small, we may expand that part of the exponential to first order, such that

$$E \approx C e^{i(kz - \omega t)}\left(1 + i\langle\phi\rangle\right),$$

where $\langle\phi\rangle$ represents the integral over all small changes in phase to the wavefront due to each point in the area of the object. Looking at the real part of this expression, we find the sum of a wave with the original unshifted phase, $\cos(kz - \omega t)$, and a wave that is $\pi/2$ out of phase with it and has very small amplitude $\langle\phi\rangle$. As written, this is simply another complex wave with phase shifted by $\langle\phi\rangle$:

$$E \approx C e^{i(kz - \omega t + \langle\phi\rangle)}.$$

Since imaging systems see only changes in the intensity of the electromagnetic waves, which is proportional to the square of the electric field, we have $I \propto |E|^2 = C^2$: the incident wave and the phase-shifted wave are equivalent in this respect. Such objects, which only impart phase changes to light passing through them, are commonly referred to as phase objects, and are for this reason invisible to any imaging system. However, if we look more closely at the real part of our phase-shifted wave,

$$\operatorname{Re}(E) = C\left[\cos(kz - \omega t) - \langle\phi\rangle \sin(kz - \omega t)\right],$$

and suppose we could shift the term unaltered by the phase object (the cosine term) by $\pi/2$, such that $\cos(kz - \omega t) \to -\sin(kz - \omega t)$, then we have

$$\operatorname{Re}(E) = -C\left(1 + \langle\phi\rangle\right)\sin(kz - \omega t).$$

The phase shifts due to the phase object are effectively converted into amplitude fluctuations of a single wave. These are detectable by an imaging system, since the intensity is now $I \propto C^2\left(1 + \langle\phi\rangle\right)^2$. This is the basis of the idea of phase-contrast imaging. [2] As an example, consider the setup shown in the figure on the right.
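The conversion of phase into intensity described above can be checked numerically. The following sketch is illustrative only: the phase shift value and the factor-of-$i$ phase plate are assumptions, not values from any specific experiment.

```python
import numpy as np

phi = 0.05                      # small phase shift from the object (radians)

E_obj = np.exp(1j * phi)        # unit-amplitude wave after the phase object

# Without a phase plate the intensity is unchanged: a pure phase object
# is invisible, since |exp(i*phi)|^2 = 1.
I_plain = abs(E_obj) ** 2

# Split the wave into the unscattered 0-order part (the "1") and the
# diffracted part, then shift the 0-order part by pi/2 (a factor of i),
# as the phase plate does.
E_0 = 1.0
E_diff = E_obj - E_0
E_pc = 1j * E_0 + E_diff

# The intensity now varies to first order in phi, approximately (1 + phi)^2.
I_pc = abs(E_pc) ** 2
```

To first order the plain intensity stays at 1 while the phase-contrast intensity follows $(1 + \phi)^2$, which is exactly the conversion of phase shifts into amplitude fluctuations described in the text.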

A schematic illustrating the ray optics of phase-contrast imaging.

A probe laser is incident on a phase object, which could be an atomic medium such as a Bose–Einstein condensate. [3] The laser light is detuned far from any atomic resonance, so that the phase object only alters the phase of the portion of the wavefront that passes through it. The rays passing through the phase object diffract as a function of the index of refraction of the medium and diverge, as shown by the dotted lines in the figure. The objective lens collimates this light while focusing the so-called 0-order light, that is, the portion of the beam unaltered by the phase object (solid lines). The 0-order light comes to a focus in the focal plane of the objective lens, where a phase plate can be positioned to delay only the phase of the 0-order beam, bringing it back into phase with the diffracted beam and converting the phase alterations in the diffracted beam into intensity fluctuations at the imaging plane. The phase plate is usually a piece of glass with a raised center encircled by a shallower etch, such that light passing through the center is delayed in phase relative to light passing through the edges.

Polarization Contrast Imaging (Faraday Imaging)

In polarization-contrast imaging, the Faraday effect of the light–matter interaction is leveraged to image the cloud using a standard absorption-imaging setup modified with a far-detuned probe beam and an extra polarizer. The Faraday effect rotates the linear polarization of the probe beam as it passes through a cloud polarized by a strong magnetic field along the propagation direction of the probe beam.

Classically, a linearly polarized probe beam may be thought of as a superposition of two oppositely handed, circularly polarized beams. The rotating magnetic field of each circular component interacts with the magnetic dipoles of the atoms in the sample. If the sample is magnetically polarized in a direction with non-zero projection onto the light field's k-vector, the two circularly polarized beams interact with the magnetic dipoles of the sample with different strengths, corresponding to a relative phase shift between the two beams. This phase shift in turn maps to a rotation of the linear polarization of the input beam.
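This classical picture can be sketched with Jones vectors: decompose a linear polarization into circular components, apply opposite phase shifts, and recombine. The differential phase below is an arbitrary illustrative value, not a measured quantity.

```python
import numpy as np

delta = 0.2                                  # assumed differential phase (rad)

x_pol = np.array([1.0, 0.0])                 # Jones vector: linear along x
lcp = np.array([1.0, 1j]) / np.sqrt(2)       # left-circular basis vector
rcp = np.array([1.0, -1j]) / np.sqrt(2)      # right-circular basis vector

# Decompose into circular components (np.vdot conjugates its first argument),
# apply opposite phase shifts, and recombine.
a_l = np.vdot(lcp, x_pol)
a_r = np.vdot(rcp, x_pol)
out = a_l * np.exp(1j * delta / 2) * lcp + a_r * np.exp(-1j * delta / 2) * rcp

# The emerging light is still linearly polarized, rotated by delta/2.
rotation = np.arctan2(out[1].real, out[0].real)
```

The output Jones vector remains real up to a global phase, confirming that a relative phase between the circular components shows up purely as a rotation of the linear polarization by half the phase difference.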

The quantum physics of the Faraday interaction may be described as a coupling between the second-quantized Stokes parameters of the probe light field's polarization and the total angular momentum state of the atoms. Thus, if a BEC or other cold, dense sample of atoms is prepared in a particular spin (hyperfine) state polarized parallel to the propagation direction of the imaging light, both the density and changes in the spin state may be monitored by feeding the transmitted probe beam through a beam splitter before imaging it onto a camera sensor. By adjusting the polarizer's optic axis relative to the input linear polarization, one can switch between a dark-field scheme (zero light in the absence of atoms) and variable phase-contrast imaging. [4] [5] [6]

Dark-field and other methods

In addition to phase contrast, there are a number of other similar dispersive imaging methods. In the dark-field method, [7] the aforementioned phase plate is made completely opaque, so that the 0-order contribution to the beam is removed entirely; in the absence of any imaging object, the image plane is dark. This amounts to removing the factor of 1 in the equation

$$\operatorname{Re}(E) = -C\left(1 + \langle\phi\rangle\right)\sin(kz - \omega t)$$

from above, leaving $\operatorname{Re}(E) = -C\langle\phi\rangle\sin(kz - \omega t)$. Comparing the squares of the two expressions, one finds that in the dark-field case the intensity signal is $I \propto C^2\langle\phi\rangle^2$, so the range of contrast (or dynamic range of the intensity signal) is actually reduced. For this reason the method has fallen out of use.
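A quick numerical comparison of the two intensity signals, over an arbitrary illustrative range of small phase shifts, makes the reduced dynamic range of the dark-field signal explicit:

```python
import numpy as np

phi = np.linspace(0.01, 0.1, 100)    # range of small object phase shifts (rad)

I_pc = (1 + phi) ** 2                # phase-contrast intensity signal
I_df = phi ** 2                      # dark-field signal: 0-order removed

# Signal swing (dynamic range of the intensity) over the same phase range:
swing_pc = I_pc.max() - I_pc.min()
swing_df = I_df.max() - I_df.min()
```

Because the dark-field signal is second order in the small phase shift while the phase-contrast signal is first order, the dark-field swing is far smaller for the same object.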

In the defocus-contrast method, [8] [9] the phase plate is replaced by a defocusing of the objective lens. Doing so breaks the equivalence of parallel ray path lengths, so that a relative phase is acquired between parallel rays. By controlling the amount of defocus, one can achieve an effect similar to that of the phase plate in standard phase contrast. In this case, however, the defocus scrambles the phase and amplitude modulation of the rays diffracted from the object in a way that does not capture the exact phase information of the object, instead producing an intensity signal proportional to the amount of phase noise in the object.

Another method, the bright-field balanced divergency (BBD) method, leverages the complementary intensity changes of transmitted disks at different scattering angles to provide straightforward, dose-efficient, and noise-robust phase imaging from atomic resolution to intermediate length scales, resolving, for example, both light and heavy atomic columns as well as nanoscale magnetic phases in FeGe samples. [10]

Light microscopy

Phase contrast takes advantage of the fact that different structures have different refractive indices, and either bend, refract, or delay the passage of light through the sample by different amounts. These changes cause some waves to be 'out of phase' with others. Phase-contrast microscopes transform this effect into amplitude differences that are observable in the eyepieces and appear as darker or brighter areas of the resulting image.

Phase contrast is used extensively in optical microscopy, in both biological and geological sciences. In biology, it is employed in viewing unstained biological samples, making it possible to distinguish between structures that are of similar transparency or refractive indices.

In geology, phase contrast is exploited to highlight differences between mineral crystals cut to a standardised thin section (usually 30 μm) and mounted under a light microscope. Crystalline materials can exhibit double refraction, in which light rays entering a crystal are split into two beams that may experience different refractive indices, depending on the angle at which they enter the crystal. The phase contrast between the two rays can be detected by the human eye with the aid of particular optical filters. As the exact nature of the double refraction varies between crystal structures, phase contrast aids in the identification of minerals.
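As a rough worked example of the phase difference accumulated in a thin section, consider quartz (an assumed example mineral, with a birefringence of about 0.009) cut to the standard thickness:

```python
# Retardation (optical path difference) between the two split rays in a
# standard 30-micrometre thin section. Quartz is used as an assumed example;
# its birefringence |n_e - n_o| is approximately 0.009.
thickness = 30e-6          # standard thin-section thickness (m)
birefringence = 0.009      # approximate birefringence of quartz

retardation = thickness * birefringence   # optical path difference (m)
# ~270 nm, i.e. within the first order of interference colours, which is why
# sections are cut to this standard thickness for mineral identification.
```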

X-ray imaging

X-ray phase-contrast image of a spider.

There are four main techniques for X-ray phase-contrast imaging, which use different principles to convert phase variations in the X-rays emerging from the object into intensity variations at an X-ray detector. [11] [12] Propagation-based phase contrast [13] uses free-space propagation to obtain edge enhancement; Talbot and polychromatic far-field interferometry [12] [14] [15] use a set of diffraction gratings to measure the derivative of the phase; refraction-enhanced imaging [16] uses an analyzer crystal, also for a differential measurement; and X-ray interferometry [17] uses a crystal interferometer to measure the phase directly. The advantages of these methods over normal absorption-contrast X-ray imaging are higher contrast for low-absorbing materials (because phase shift is a different mechanism than absorption) and a contrast-to-noise ratio that increases with spatial frequency (because many phase-contrast techniques detect the first or second derivative of the phase shift), which makes it possible to see smaller details. [15] One disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high-resolution X-ray detectors. This equipment provides the sensitivity required to differentiate between small variations in the refractive index of X-rays passing through different media: for X-rays the refractive index is normally smaller than 1, differing from 1 by between 10⁻⁷ and 10⁻⁶.

All of these methods produce images that can be used to calculate the projections (line integrals) of the refractive index in the imaging direction. For propagation-based phase contrast there are phase-retrieval algorithms; for Talbot interferometry and refraction-enhanced imaging the image is integrated in the appropriate direction; and for X-ray interferometry, phase unwrapping is performed. For this reason these methods are well suited for tomography, i.e. reconstruction of a 3D map of the refractive index of the object from many images taken at slightly different angles. For X-rays, the deviation of the refractive index from 1 is essentially proportional to the density of the material.

Synchrotron X-ray tomography can employ phase contrast imaging to enable imaging of the interior surfaces of objects. In this context, phase contrast imaging is used to enhance the contrast that would normally be possible from conventional radiographic imaging. A difference in the refractive index between a detail and its surroundings causes a phase shift between the light wave that travels through the detail and that which travels outside the detail. An interference pattern results, marking out the detail. [18]

This method has been used to image Precambrian metazoan embryos from the Doushantuo Formation in China, allowing the internal structure of delicate microfossils to be imaged without destroying the original specimen. [19]

Transmission electron microscopy

In the field of transmission electron microscopy, phase-contrast imaging may be employed to image columns of individual atoms. This ability arises from the fact that the atoms in a material diffract electrons as the electrons pass through them (the relative phases of the electrons change upon transmission through the sample), causing diffraction contrast in addition to the already present contrast in the transmitted beam. Phase-contrast imaging is the highest resolution imaging technique ever developed, and can allow for resolutions of less than one angstrom (less than 0.1 nanometres). It thus enables the direct viewing of columns of atoms in a crystalline material. [20] [21]

The interpretation of phase-contrast images is not a straightforward task. Deconvolving the contrast seen in an HR image to determine which features are due to which atoms in the material can rarely, if ever, be done by eye. Instead, because the combination of contrasts due to multiple diffracting elements and planes and the transmitted beam is complex, computer simulations are used to determine what sort of contrast different structures may produce in a phase-contrast image. Thus, a reasonable amount of information about the sample needs to be understood before a phase contrast image can be properly interpreted, such as a conjecture as to what crystal structure the material has.

Phase-contrast images are formed by removing the objective aperture entirely or by using a very large objective aperture. This ensures that not only the transmitted beam, but also the diffracted ones are allowed to contribute to the image. Instruments that are specifically designed for phase-contrast imaging are often called HRTEMs (high resolution transmission electron microscopes), and differ from analytical TEMs mainly in the design of the electron beam column. Whereas analytical TEMs employ additional detectors attached to the column for spectroscopic measurements, HRTEMs have little or no additional attachments so as to ensure a uniform electromagnetic environment all the way down the column for each beam leaving the sample (transmitted and diffracted). Because phase-contrast imaging relies on differences in phase between electrons leaving the sample, any additional phase shifts that occur between the sample and the viewing screen can make the image impossible to interpret. Thus, a very low degree of lens aberration is also a requirement for HRTEMs, and advances in spherical aberration (Cs) correction have enabled a new generation of HRTEMs to reach resolutions once thought impossible.


References

  1. Jiang Y, Chen Z, Han Y, Deb P, Gao H, Xie S, et al. (July 2018). "Electron ptychography of 2D materials to deep sub-ångström resolution". Nature. 559 (7714): 343–349. doi:10.1038/s41586-018-0298-5. PMID 30022131.
  2. Hecht, Eugene (2017). Optics (5 ed.). Pearson. p. 647. ISBN   978-1-292-09693-3.
  3. Andrews, M.R. (1996-07-05). "Direct, Nondestructive Observation of a Bose Condensate". Science. 273 (5271): 84–87. Bibcode:1996Sci...273...84A. doi:10.1126/science.273.5271.84. PMID   8688055. S2CID   888479.
  4. Julsgaard, Brian (2004). "Experimental demonstration of quantum memory for light". Nature. 432 (7016): 482–486. arXiv: quant-ph/0410072 . Bibcode:2004Natur.432..482J. doi:10.1038/nature03064. PMID   15565148. S2CID   4423785.
  5. Bradley, C. C. (1997). "Bose-Einstein Condensation of Lithium: Observation of Limited Condensate Number". Physical Review Letters. 78 (6): 985–989. Bibcode:1997PhRvL..78..985B. doi:10.1103/PhysRevLett.78.985.
  6. Gajdacz, Miroslav (2013). "Non-destructive Faraday Imaging of dynamically controlled ultracold atoms". Review of Scientific Instruments. 84 (8): 083105–083105–8. arXiv: 1301.3018 . Bibcode:2013RScI...84h3105G. doi:10.1063/1.4818913. PMID   24007051. S2CID   766468.
  7. Hecht, Eugene (2017). Optics (5 ed.). Pearson. p. 651. ISBN   978-1-292-09693-3.
  8. Turner, L.D. (2004). "Off-resonant defocus-contrast imaging of cold atoms". Optics Letters. 29 (3): 232–234. Bibcode:2004OptL...29..232T. doi:10.1364/OL.29.000232. PMID   14759035.
  9. Sanner, Christian (2011). "Speckle Imaging of Spin Fluctuations in a Strongly Interacting Fermi Gas". Physical Review Letters. 106 (1): 010402. arXiv: 1010.1874 . Bibcode:2011PhRvL.106a0402S. doi:10.1103/PhysRevLett.106.010402. PMID   21231722. S2CID   2841337.
  10. Wang B, McComb DW (2023). "Phase imaging in scanning transmission electron microscopy using bright-field balanced divergency method". Ultramicroscopy. 245: 113665. doi:10.1016/j.ultramic.2022.113665.
  11. Fitzgerald R (2000). "Phase-sensitive x-ray imaging". Physics Today. 53 (7): 23–26. Bibcode:2000PhT....53g..23F. doi: 10.1063/1.1292471 . S2CID   121322301.
  12. David C, Nohammer B, Solak HH, Ziegler E (2002). "Differential x-ray phase contrast imaging using a shearing interferometer". Applied Physics Letters. 81 (17): 3287–3289. Bibcode:2002ApPhL..81.3287D. doi: 10.1063/1.1516611 .
  13. Wilkins SW, Gureyev TE, Gao D, Pogany A, Stevenson AW (1996). "Phase-contrast imaging using polychromatic hard X-rays". Nature. 384 (6607): 335–338. Bibcode:1996Natur.384..335W. doi:10.1038/384335a0. S2CID   4273199.
  14. Miao H, Panna A, Gomella AA, Bennett EE, Znati S, Chen L, Wen H (2016). "A Universal Moiré Effect and Application in X-Ray Phase-Contrast Imaging". Nature Physics. 12 (9): 830–834. Bibcode:2016NatPh..12..830M. doi:10.1038/nphys3734. PMC   5063246 . PMID   27746823.
  15. Fredenberg E, Danielsson M, Stayman JW, Siewerdsen JH, Aslund M (September 2012). "Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach". Medical Physics. 39 (9): 5317–35. Bibcode:2012MedPh..39.5317F. doi:10.1118/1.4739195. PMC   3427340 . PMID   22957600.
  16. Davis TJ, Gao D, Gureyev TE, Stevenson AW, Wilkins SW (1995). "Phase-contrast imaging of weakly absorbing materials using hard X-rays". Nature. 373 (6515): 595–598. Bibcode:1995Natur.373..595D. doi:10.1038/373595a0. S2CID   4287341.
  17. Momose A, Takeda T, Itai Y, Hirano K (April 1996). "Phase-contrast X-ray computed tomography for observing biological soft tissues". Nature Medicine. 2 (4): 473–5. doi:10.1038/nm0496-473. PMID   8597962. S2CID   23523144.
  18. "Phase Contrast Imaging". UCL Department of Medical Physics and Bioengineering Radiation Physics Group. Archived from the original on 28 September 2011. Retrieved 2011-07-19.
  19. Chen JY, Bottjer DJ, Davidson EH, Li G, Gao F, Cameron RA, et al. (September 2009). "Phase contrast synchrotron X-ray microtomography of Ediacaran (Doushantuo) metazoan microfossils: Phylogenetic diversity and evolutionary implications". Precambrian Research. 173 (1–4): 191–200. Bibcode:2009PreR..173..191C. doi:10.1016/j.precamres.2009.04.004.
  20. Williams DB, Carter CB (2009). Transmission Electron Microscopy: A Textbook for Materials Science. Springer, Boston, MA. doi:10.1007/978-0-387-76501-3. ISBN   978-0-387-76500-6.
  21. Fultz B, Howe JM (2013). Transmission Electron Microscopy and Diffractometry of Materials. Graduate Texts in Physics. Springer-Verlag Berlin Heidelberg. Bibcode:2013temd.book.....F. doi:10.1007/978-3-642-29761-8. ISBN   978-3-642-29760-1.