Fluorescence interference contrast (FLIC) microscopy is a microscopy technique developed to achieve z-resolution on the nanometer scale.
FLIC occurs whenever fluorescent objects are in the vicinity of a reflecting surface (e.g. a Si wafer). The resulting interference between the direct and the reflected light leads to a double sin² modulation of the intensity, I, of a fluorescent object as a function of its distance, h, above the reflecting surface. This modulation is what permits height measurements with nanometer precision.
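As a minimal sketch of this double sin² dependence, the idealized case of normal incidence, monochromatic excitation and emission, and a perfectly reflecting mirror can be written down directly (the function name, wavelengths and refractive index here are illustrative assumptions, not values from the theory below):

```python
import math

def flic_intensity(h, lam_ex=488e-9, lam_em=520e-9, n=1.33):
    """Idealized relative FLIC intensity for a fluorophore at height h (m)
    above a perfect mirror: one sin^2 factor from the excitation standing
    wave and one from the emission interference, at normal incidence."""
    return (math.sin(2 * math.pi * n * h / lam_ex) ** 2
            * math.sin(2 * math.pi * n * h / lam_em) ** 2)

# The intensity vanishes at the mirror (h = 0) and first peaks roughly a
# quarter wavelength above it, which is why small height differences near
# the surface translate into large intensity contrast.
```
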
FLIC microscopy is well suited to measuring the topography of a membrane that contains fluorescent probes (e.g. an artificial lipid bilayer or a living cell membrane) or the structure of fluorescently labeled proteins on a surface.
The optical theory underlying FLIC was developed by Armin Lambacher and Peter Fromherz. They derived a relationship between the observed fluorescence intensity and the distance of the fluorophore from a reflective silicon surface.
The observed fluorescence intensity, I, is the product of the excitation probability per unit time, P_ex, and the probability of measuring an emitted photon per unit time, P_em. Both probabilities are functions of the fluorophore height above the silicon surface, so the observed intensity is as well. The simplest arrangement to consider is a fluorophore embedded in silicon dioxide (refractive index n_ox) a distance d from an interface with silicon (refractive index n_si). The fluorophore is excited by light of wavelength λ_in and emits light of wavelength λ_out. The unit vector ê gives the orientation of the transition dipole of excitation of the fluorophore. P_ex is proportional to the squared projection of the local electric field, E_in, which includes the effects of interference, onto the direction of the transition dipole:

P_ex ∝ |ê · E_in|²
The local electric field, E_in, at the fluorophore is affected by interference between the direct incident light and the light reflected off the silicon surface. The interference is quantified by the phase difference Φ given by

Φ = 4π n_ox d cos(θ_in) / λ_in
θ_in is the angle of the incident light with respect to the silicon plane normal. Not only does interference modulate E_in, but the silicon surface does not perfectly reflect the incident light. Fresnel coefficients give the change in amplitude between an incident and a reflected wave. They depend on the angles of incidence and refraction, θ_in and θ_si, on the indices of refraction of the two media, and on the polarization direction. The angles θ_in and θ_si are related by Snell's law, n_ox sin θ_in = n_si sin θ_si. The expressions for the reflection coefficients are:

r^TE = (n_ox cos θ_in − n_si cos θ_si) / (n_ox cos θ_in + n_si cos θ_si)
r^TM = (n_si cos θ_in − n_ox cos θ_si) / (n_si cos θ_in + n_ox cos θ_si)
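These reflection coefficients can be sketched in a few lines (the refractive index values are illustrative; real silicon has a complex, wavelength-dependent index, which this simplified version ignores):

```python
import cmath
import math

def fresnel_r(n1, n2, theta1):
    """Amplitude reflection coefficients (r_TE, r_TM) for light incident
    from medium 1 (index n1) onto medium 2 (index n2) at angle theta1
    (radians) from the normal; theta2 follows from Snell's law."""
    cos_t1 = math.cos(theta1)
    sin_t2 = n1 * math.sin(theta1) / n2      # Snell: n1 sin t1 = n2 sin t2
    cos_t2 = cmath.sqrt(1 - sin_t2 ** 2)     # complex-safe form
    r_te = (n1 * cos_t1 - n2 * cos_t2) / (n1 * cos_t1 + n2 * cos_t2)
    r_tm = (n2 * cos_t1 - n1 * cos_t2) / (n2 * cos_t1 + n1 * cos_t2)
    return r_te, r_tm

# SiO2 -> Si at normal incidence (illustrative real indices 1.46 and 4.0);
# at theta1 = 0 the TE and TM coefficients agree up to the sign convention.
r_te, r_tm = fresnel_r(1.46, 4.0, 0.0)
```
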
TE refers to the component of the electric field perpendicular to the plane of incidence and TM to the parallel component (the plane of incidence is defined by the plane normal and the propagation direction of the light). In Cartesian coordinates, with z along the surface normal and the plane of incidence in the x–z plane, the local electric field is, up to an overall amplitude,

E_in ∝ ( cos γ cos θ_in (e^{iΦ/2} − r^TM e^{−iΦ/2}),  sin γ (e^{iΦ/2} + r^TE e^{−iΦ/2}),  cos γ sin θ_in (e^{iΦ/2} + r^TM e^{−iΦ/2}) )
γ is the polarization angle of the incident light with respect to the plane of incidence. The orientation of the excitation dipole is a function of its angle θ to the normal and its azimuthal angle φ to the plane of incidence:

ê = (sin θ cos φ, sin θ sin φ, cos θ)
The above two equations for E_in and ê can be combined to give the probability of exciting the fluorophore per unit time, P_ex ∝ |ê · E_in|².
Many of the parameters used above would vary in a normal experiment. The variation in the following five parameters should be included in this theoretical description:

- the angle of incidence θ_in (the focused illumination spans a range of angles)
- the polarization angle γ of the incident light
- the polar angle θ of the fluorophore dipole
- the azimuthal angle φ of the fluorophore dipole
- the excitation wavelength λ_in
The squared projection |ê · E_in|² must be averaged over these quantities to give the probability of excitation P_ex. Averaging over the first four parameters gives the mean-squared field projection

⟨|ê · E_in|²⟩(d, λ_in) = ∫ dθ_in ∫ dθ P(θ) ⟨|ê · E_in|²⟩_{γ, φ}

Normalization factors are not included. P(θ) is the distribution of the orientation angle of the fluorophore dipoles. The azimuthal angle φ and the polarization angle γ are integrated over analytically, so they no longer appear explicitly in the above equation. To finally obtain the probability of excitation per unit time, the above quantity is integrated over the spread in excitation wavelength, weighted by the illumination intensity I_in(λ_in) and the extinction coefficient ε(λ_in) of the fluorophore:

P_ex(d) ∝ ∫ dλ_in I_in(λ_in) ε(λ_in) ⟨|ê · E_in|²⟩(d, λ_in)
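The chain of steps, interfering field, dipole projection, squared amplitude, can be sketched numerically for the simplest case of normal incidence and an in-plane dipole (the index values and the single-wavelength shortcut are assumptions of this sketch, not part of the full theory, which averages over angles, polarization and wavelength):

```python
import cmath
import math

N_OX, N_SI = 1.46, 4.0              # illustrative refractive indices
R0 = (N_OX - N_SI) / (N_OX + N_SI)  # Fresnel coefficient, normal incidence

def p_ex(d, lam_ex=488e-9):
    """Relative excitation probability for an in-plane transition dipole a
    distance d (m) from the Si interface: the direct and reflected waves
    are summed with the phase difference phi, then squared."""
    phi = 4 * math.pi * N_OX * d / lam_ex   # phase difference of the two paths
    field = cmath.exp(1j * phi / 2) + R0 * cmath.exp(-1j * phi / 2)
    return abs(field) ** 2

# The standing wave puts the first field antinode about lam/(4 n) above the
# interface, so a fluorophore there is excited much more strongly than one
# sitting directly on the reflector.
```
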
The steps to calculate P_em are equivalent to those above for P_ex, except that the parameter labels ex are replaced with em and in is replaced with out.
The resulting measured fluorescence intensity is proportional to the product of the excitation and emission probabilities:

I ∝ P_ex · P_em
It is important to note that this theory establishes a proportionality, not an equality, between the measured fluorescence intensity and the distance of the fluorophore above the reflective surface. This has a significant effect on the experimental procedure.
A silicon wafer is typically used as the reflective surface in a FLIC experiment. An oxide layer is then thermally grown on top of the silicon wafer to act as a spacer. On top of the oxide is placed the fluorescently labeled specimen, such as a lipid membrane, a cell or membrane bound proteins. With the sample system built, all that is needed is an epifluorescence microscope and a CCD camera to make quantitative intensity measurements.
The silicon dioxide thickness is very important for accurate FLIC measurements. As mentioned above, the theoretical model describes the relative fluorescence intensity measured versus the fluorophore height: the fluorophore position cannot simply be read off a single measured FLIC curve. The basic procedure is therefore to manufacture the oxide layer with at least two known thicknesses (the layer can be patterned with photolithographic techniques and its thickness measured by ellipsometry). The thicknesses used depend on the sample being measured. For a sample with fluorophore heights in the range of 10 nm, an oxide thickness around 50 nm would be best, because the FLIC intensity curve is steepest there and produces the greatest contrast between fluorophore heights. Oxide thicknesses above a few hundred nanometers can be problematic because the curve begins to be smeared out by polychromatic light and the range of incident angles. A ratio of measured fluorescence intensities at different oxide thicknesses, d_ox1 and d_ox2, is compared to the predicted ratio to calculate the fluorophore height above the oxide, h:

I(d_ox1) / I(d_ox2) = [P_ex(d_ox1 + h) P_em(d_ox1 + h)] / [P_ex(d_ox2 + h) P_em(d_ox2 + h)]
The above equation can then be solved numerically to find the fluorophore height h. Imperfections of the experiment, such as imperfect reflection, non-normal incidence of light and polychromatic light, tend to smear out the sharp fluorescence curves. The spread in incidence angle can be controlled by the numerical aperture (N.A.). However, depending on the numerical aperture used, the experiment will yield good lateral resolution (x-y) or good vertical resolution (z), but not both. A high N.A. (~1.0) gives good lateral resolution, which is best if the goal is to determine long-range topography. A low N.A. (~0.001), on the other hand, provides accurate z-height measurements to determine the height of a fluorescently labeled molecule in a system.
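As an illustration of the ratio method, the sketch below uses a deliberately simplified intensity model (normal incidence, perfect mirror, single excitation and emission wavelengths; all numeric values are illustrative) and recovers the fluorophore height from intensities on two oxide thicknesses by a simple grid search:

```python
import math

def model_intensity(d_ox, h, n=1.46, lam_ex=488e-9, lam_em=520e-9):
    """Toy FLIC intensity for a fluorophore at height h above an oxide
    spacer of thickness d_ox (normal incidence, perfectly reflecting Si)."""
    z = d_ox + h                            # optical distance to the mirror
    return (math.sin(2 * math.pi * n * z / lam_ex) ** 2
            * math.sin(2 * math.pi * n * z / lam_em) ** 2)

def solve_height(i1, i2, d_ox1, d_ox2, h_max=30e-9, steps=3000):
    """Find the height h in [0, h_max] whose predicted intensity ratio best
    matches the measured ratio i1/i2 (grid search; a real analysis fits the
    full optical theory rather than this toy model)."""
    target = i1 / i2
    best_h, best_err = 0.0, float("inf")
    for k in range(steps + 1):
        h = h_max * k / steps
        err = abs(model_intensity(d_ox1, h) / model_intensity(d_ox2, h) - target)
        if err < best_err:
            best_h, best_err = h, err
    return best_h

# Synthetic check: intensities generated at h = 12 nm on 50 nm and 90 nm
# oxide terraces should lead the search back to roughly 12 nm.
h_true = 12e-9
h_found = solve_height(model_intensity(50e-9, h_true),
                       model_intensity(90e-9, h_true), 50e-9, 90e-9)
```
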
The basic analysis involves fitting the intensity data with the theoretical model, allowing the distance h of the fluorophore above the oxide surface to be a free parameter. The FLIC curves shift to the left as the distance of the fluorophore above the oxide increases. h is usually the parameter of interest, but several other free parameters are often included to optimize the fit. Normally an amplitude factor a and a constant additive background term b are included. The amplitude factor scales the relative model intensity, and the constant background shifts the curve up or down to account for fluorescence coming from out-of-focus areas, such as the top side of a cell. Occasionally the numerical aperture (N.A.) of the microscope is allowed to be a free parameter in the fitting. The other parameters entering the optical theory, such as the indices of refraction, layer thicknesses and light wavelengths, are assumed constant, with some uncertainty. A FLIC chip may be made with oxide terraces of 9 or 16 different heights arranged in blocks. After a fluorescence image is captured, each 9- or 16-terrace block yields a separate FLIC curve that defines a unique h. The average h is found by compiling all the h values into a histogram.
The statistical error in the calculation of h comes from two sources: the error in fitting the optical theory to the data and the uncertainty in the thickness of the oxide layer. Systematic error comes from three sources: the measurement of the oxide thickness (usually by ellipsometry), the fluorescence intensity measurement with the CCD, and the uncertainty in the parameters used in the optical theory. The systematic error has been estimated to be .
In physics, the cross section is a measure of the probability that a specific process will take place when some kind of radiant excitation intersects a localized phenomenon. For example, the Rutherford cross section is a measure of the probability that an alpha particle will be deflected by a given angle during an interaction with an atomic nucleus. The cross section is typically denoted σ (sigma) and is expressed in units of area, more specifically in barns. In a way, it can be thought of as the size of the object that the excitation must hit in order for the process to occur, but more precisely, it is a parameter of a stochastic process.
Diffraction refers to various phenomena that occur when a wave encounters an obstacle or opening. It is defined as the interference or bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660.
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields.
In physics and chemistry, Bragg's law (also known as the Wulff–Bragg condition or Laue–Bragg interference), a special case of Laue diffraction, gives the angles for coherent scattering of waves from a crystal lattice. It describes the superposition of wave fronts scattered by lattice planes, leading to a strict relation between wavelength and scattering angle, or equivalently to the wavevector transfer with respect to the crystal lattice. The law was initially formulated for X-rays scattering from crystals, but it applies to all sorts of quantum beams, including neutron and electron waves at atomic distances, as well as visible light at artificial periodic microscale lattices.
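As a numerical illustration, Bragg's condition nλ = 2d sin θ can be inverted for the glancing angle θ (the wavelength and lattice spacing below are illustrative example values):

```python
import math

def bragg_angle(d_spacing, wavelength, order=1):
    """Glancing angle theta (radians) satisfying n*lambda = 2*d*sin(theta).
    Returns None when the reflection order is not geometrically possible."""
    s = order * wavelength / (2 * d_spacing)
    return math.asin(s) if s <= 1 else None

# X-rays of 0.154 nm wavelength on planes with d = 0.2 nm spacing:
theta = bragg_angle(0.2e-9, 0.154e-9)
```

Higher orders stop existing once n*lambda exceeds 2d, which is why the function returns None rather than an angle in that regime.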
In probability theory, the Borel–Kolmogorov paradox is a paradox relating to conditional probability with respect to an event of probability zero. It is named after Émile Borel and Andrey Kolmogorov.
The solar zenith angle is the angle between the sun’s rays and the vertical direction. It is closely related to the solar altitude angle, which is the angle between the sun’s rays and a horizontal plane. Since these two angles are complementary, the cosine of either one of them equals the sine of the other. They can both be calculated with the same formula, using results from spherical trigonometry. At solar noon, the zenith angle is at a minimum and is equal to latitude minus solar declination angle. This is the basis by which ancient mariners navigated the oceans.
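The spherical-trigonometry formula referred to above is cos θ_z = sin φ sin δ + cos φ cos δ cos h, with latitude φ, solar declination δ and hour angle h; a minimal sketch (the example coordinates are illustrative):

```python
import math

def solar_zenith(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle (degrees) from
    cos(theta_z) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour_angle)."""
    lat, decl, ha = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(ha))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

# At solar noon (hour angle 0) the formula reduces to latitude minus
# declination: e.g. 48 N at the June solstice (declination ~23.44 deg).
noon = solar_zenith(48.0, 23.44, 0.0)
```
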
The solar azimuth angle is the azimuth angle of the Sun's position. This horizontal coordinate defines the Sun's relative direction along the local horizon, whereas the solar zenith angle defines the Sun's apparent altitude.
In mathematics, the associated Legendre polynomials are the canonical solutions of the general Legendre equation

(1 − x²) y″ − 2x y′ + [l(l + 1) − m² / (1 − x²)] y = 0
In cartography, a Tissot's indicatrix is a mathematical contrivance presented by French mathematician Nicolas Auguste Tissot in 1859 and 1871 in order to characterize local distortions due to map projection. It is the geometry that results from projecting a circle of infinitesimal radius from a curved geometric model, such as a globe, onto a map. Tissot proved that the resulting diagram is an ellipse whose axes indicate the two principal directions along which scale is maximal and minimal at that point on the map.
In electromagnetics, directivity is a parameter of an antenna or optical system which measures the degree to which the radiation emitted is concentrated in a single direction. It is the ratio of the radiation intensity in a given direction from the antenna to the radiation intensity averaged over all directions. Therefore, the directivity of a hypothetical isotropic radiator is 1, or 0 dBi.
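The dBi figure is just the linear directivity on a decibel scale; a minimal helper (the half-wave-dipole value below is a standard reference number, used only as an example):

```python
import math

def directivity_dbi(d_linear):
    """Convert linear directivity to dBi (decibels relative to isotropic)."""
    return 10 * math.log10(d_linear)

# An isotropic radiator has directivity 1 -> 0 dBi;
# a half-wave dipole has directivity ~1.64 -> ~2.15 dBi.
```
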
In astronomy, position angle is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive into the direction of the right ascension. In the standard (non-flipped) images, this is a counterclockwise measure relative to the axis into the direction of positive declination.
Cylindrical multipole moments are the coefficients in a series expansion of a potential that varies logarithmically with the distance to a source, i.e., as ln ρ. Such potentials arise in the electric potential of long line charges, and in the analogous sources for the magnetic potential and gravitational potential.
Ellipsoidal coordinates are a three-dimensional orthogonal coordinate system that generalizes the two-dimensional elliptic coordinate system. Unlike most three-dimensional orthogonal coordinate systems that feature quadratic coordinate surfaces, the ellipsoidal coordinate system is based on confocal quadrics.
In special functions, a topic in mathematics, spin-weighted spherical harmonics are generalizations of the standard spherical harmonics and, like the usual spherical harmonics, are functions on the sphere. Unlike ordinary spherical harmonics, the spin-weighted harmonics are U(1) gauge fields rather than scalar fields: mathematically, they take values in a complex line bundle. The spin-weighted harmonics are organized by degree l, just like ordinary spherical harmonics, but carry an additional spin weight s that reflects the additional U(1) symmetry. A special basis of harmonics can be derived from the Laplace spherical harmonics Ylm, and its members are typically denoted sYlm, where l and m are the usual parameters familiar from the standard Laplace spherical harmonics. In this special basis, the spin-weighted spherical harmonics appear as actual functions, because the choice of a polar axis fixes the U(1) gauge ambiguity. The spin-weighted spherical harmonics can be obtained from the standard spherical harmonics by application of spin raising and lowering operators. In particular, the spin-weighted spherical harmonics of spin weight s = 0 are simply the standard spherical harmonics:

0Ylm = Ylm
Diffraction processes affecting waves are amenable to quantitative description and analysis. Such treatments are applied to a wave passing through one or more slits whose width is specified as a proportion of the wavelength. Numerical approximations may be used, including the Fresnel and Fraunhofer approximations.
In physics and mathematics, the solid harmonics are solutions of the Laplace equation in spherical polar coordinates, assumed to be (smooth) functions from R³ to C. There are two kinds: the regular solid harmonics R_l^m, which are well-defined at the origin, and the irregular solid harmonics I_l^m, which are singular at the origin. Both sets of functions play an important role in potential theory, and are obtained by rescaling spherical harmonics appropriately:

R_l^m(r, θ, φ) = sqrt(4π / (2l + 1)) r^l Y_l^m(θ, φ)
I_l^m(r, θ, φ) = sqrt(4π / (2l + 1)) Y_l^m(θ, φ) / r^(l+1)
Geographical distance or geodetic distance is the distance measured along the surface of the earth. The formulae in this article calculate distances between points which are defined by geographical coordinates in terms of latitude and longitude. This distance is an element in solving the second (inverse) geodetic problem.
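For the spherical-Earth approximation, the haversine formula is the usual starting point (the radius below is the conventional mean Earth radius; ellipsoidal geodesic formulas give slightly different results):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance on a spherical Earth via the haversine formula
    (an approximation; geodetic distance on the ellipsoid differs slightly)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```

The haversine form is preferred over the plain spherical law of cosines because it stays numerically stable for the small distances typical of practical use.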
Quantum mechanics was first applied to optics, and interference in particular, by Paul Dirac. Richard Feynman, in his Lectures on Physics, uses Dirac's notation to describe thought experiments on double-slit interference of electrons. Feynman's approach was extended to N-slit interferometers for either single-photon illumination, or narrow-linewidth laser illumination, that is, illumination by indistinguishable photons, by Frank Duarte. The N-slit interferometer was first applied in the generation and measurement of complex interference patterns.
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens.
In physics and engineering, the radiative heat transfer from one surface to another is equal to the difference between the incoming and outgoing radiation from the first surface. In general, the heat transfer between surfaces is governed by temperature, surface emissivity properties and the geometry of the surfaces. The relation for heat transfer can be written as an integral equation with boundary conditions based upon surface conditions. Kernel functions can be useful in approximating and solving this integral equation.
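A closed-form special case helps make the roles of temperature and emissivity concrete: two large parallel gray plates, for which the view factor is 1 (a standard textbook result; the helper name is illustrative):

```python
# Net radiative exchange between two large parallel gray plates.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def parallel_plate_flux(t1, t2, eps1, eps2):
    """Net radiative heat flux (W/m^2) from plate 1 to plate 2, with
    temperatures in kelvin and emissivities eps1, eps2 in (0, 1]."""
    return SIGMA * (t1 ** 4 - t2 ** 4) / (1 / eps1 + 1 / eps2 - 1)
```

For black plates (both emissivities 1) the denominator reduces to 1 and the flux is just the Stefan-Boltzmann difference of the two surfaces.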