Differential optical absorption spectroscopy

Long-path DOAS System at the Cape Verde Atmospheric Observatory (CVAO) at São Vicente, Cape Verde

In atmospheric chemistry, differential optical absorption spectroscopy (DOAS) is used to measure concentrations of trace gases. When combined with basic optical spectrometers, such as prism or diffraction-grating instruments, and automated, ground-based observation platforms, it provides a cheap and powerful means of measuring trace gas species such as ozone and nitrogen dioxide. Typical setups reach detection limits corresponding to optical depths of 0.0001 along light paths of typically up to 15 km, and thus also allow the detection of weak absorbers such as water vapour, nitrous acid, formaldehyde, tetraoxygen, iodine oxide, bromine oxide and chlorine oxide.


Theory

DOAS instruments are often divided into two main groups: passive and active. Active DOAS systems, such as long-path (LP) and cavity-enhanced (CE) DOAS systems, have their own light source, whereas passive ones use the sun as their light source, e.g. multi-axis (MAX-)DOAS. The moon can also be used for night-time DOAS measurements, but in that case direct-light measurements usually have to be made instead of the scattered-light measurements typical of passive DOAS systems such as MAX-DOAS.

The change in intensity of a beam of radiation as it travels through a medium that is not emitting is given by the Beer–Lambert law:

$$ I = I_0 \exp\left(-\int \sum_i \rho_i\,\sigma_i \, ds\right) $$

where $I$ is the intensity of the radiation, $\rho_i$ is the density of substance $i$, $\sigma_i$ is its absorption and scattering cross section and $s$ is the path. The subscript $i$ denotes different species, assuming that the medium is composed of multiple substances. Several simplifications can be made. The first is to pull the absorption cross section out of the integral by assuming that it does not change significantly with the path, i.e. that it is a constant. Since the DOAS method is used to measure total column density, and not density per se, the second is to take the integral as a single parameter, which we call the column density:

$$ c_i = \int \rho_i \, ds $$

The new, considerably simplified equation now looks like this:

$$ I = I_0 \exp\left(-\sum_i \sigma_i\, c_i\right) $$
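To give a sense of scale, with assumed, purely illustrative numbers: a single absorber with cross section $\sigma = 2\times10^{-19}\,\mathrm{cm^2}$ and column density $c = 10^{16}\,\mathrm{cm^{-2}}$ produces an optical depth of

$$ \tau = \sigma c = 2\times10^{-3}, \qquad \frac{I}{I_0} = e^{-\tau} \approx 0.998, $$

i.e. about 0.2 % attenuation, well above the optical-depth detection limit of roughly 0.0001 quoted above.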
If that were all there was to it, then given any spectrum with sufficient resolution and spectral features, all the species could be solved for by simple algebraic inversion. Active DOAS variants can use the spectrum of the light source itself as the reference. Unfortunately, for passive measurements, where we measure from the bottom of the atmosphere rather than from the top, there is no way to determine the initial intensity $I_0$. Instead, the ratio of two measurements with different paths through the atmosphere is taken, which yields the difference in optical depth between the two columns (alternatively, a solar atlas can be employed, but this introduces another important error source into the fitting process, the instrument function itself; if the reference spectrum is recorded with the same setup, these effects eventually cancel out):

$$ \tau = \ln\frac{I_1}{I_2} = \sum_i \sigma_i \left(c_{i,2} - c_{i,1}\right) $$

where $c_{i,1}$ and $c_{i,2}$ are the column densities of species $i$ along the first and second path, respectively.
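The relationship between the measured intensities and the column densities can be sketched numerically. The following Python snippet is a minimal illustration with made-up cross sections and columns (not real spectroscopic data): it evaluates the simplified Beer–Lambert law for two paths and recovers the optical-depth difference from the intensity ratio, with $I_0$ cancelling out.

```python
import numpy as np

# Illustrative, made-up values (not real spectroscopic data).
sigma = np.array([2.0e-19, 5.0e-20])   # cross sections of two species [cm^2]
c1 = np.array([1.0e16, 3.0e16])        # column densities along path 1 [cm^-2]
c2 = np.array([4.0e16, 8.0e16])        # column densities along path 2 [cm^-2]
I0 = 1.0                               # initial intensity (arbitrary units)

# Simplified Beer-Lambert law: I = I0 * exp(-sum_i sigma_i * c_i)
I1 = I0 * np.exp(-np.dot(sigma, c1))
I2 = I0 * np.exp(-np.dot(sigma, c2))

# Optical-depth difference from the intensity ratio ...
tau = np.log(I1 / I2)
# ... equals sum_i sigma_i * (c_{i,2} - c_{i,1}); I0 cancels out.
tau_check = np.dot(sigma, c2 - c1)

print(tau, tau_check)   # both ~ 8.5e-3
```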
A significant component of a measured spectrum is often given by scattering and continuum components that vary smoothly with wavelength. Since these do not supply much information, the cross section can be divided into two parts:

$$ \sigma_i = \sigma_i^0 + \sigma_i' $$

where $\sigma_i^0$ is the continuum component of the cross section and $\sigma_i'$ is that which remains, which we shall call the differential cross section. Therefore:

$$ \tau' = \sum_i \sigma_i' \left(c_{i,2} - c_{i,1}\right) $$

where we call $\tau'$ the differential optical depth (DOD). Removing the continuum components and adding in the wavelength dependence produces a matrix equation with which to do the inversion:

$$ \tau'(\lambda_j) = \sum_i \sigma_i'(\lambda_j)\left(c_{i,2} - c_{i,1}\right) $$
What this means is that, before performing the inversion, the continuum components must be removed both from the optical depth and from the species cross sections. This is the important "trick" of the DOAS method. In practice, it is done by simply fitting a polynomial to the spectrum and then subtracting it. Obviously, this will not produce an exact equality between the measured optical depths and those calculated with the differential cross sections, but the difference is usually small. Alternatively, binomial high-pass filters are commonly applied to remove the broad-band structures from the optical density.
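As a rough sketch of how this can be done in practice (using synthetic, assumed spectra rather than real instrument data), one may fit and subtract a low-order polynomial from both the measured optical depth and the cross sections, and then solve the resulting matrix equation by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(300.0, 330.0, 200)      # nm, assumed spectral window

def remove_continuum(spectrum, order=3):
    """Fit a low-order polynomial over wavelength and subtract it,
    keeping only the narrow-band (differential) structure."""
    coeffs = np.polyfit(wavelength, spectrum, order)
    return spectrum - np.polyval(coeffs, wavelength)

# Two made-up cross sections: broad continuum plus narrow bands [cm^2].
sigma = np.stack([
    1e-19 * (1.0 + 0.01 * (wavelength - 315.0))
        + 2e-20 * np.sin(2.0 * np.pi * wavelength / 2.0),
    5e-20 * np.exp(-(wavelength - 310.0)**2 / 200.0)
        + 1e-20 * np.cos(2.0 * np.pi * wavelength / 3.0),
], axis=1)                                        # shape (n_wavelengths, n_species)

true_dc = np.array([3.0e16, 5.0e16])              # true column differences [cm^-2]

# Synthetic optical depth: absorbers + smooth scattering term + noise.
tau = (sigma @ true_dc + 0.05 + 1e-4 * (wavelength - 300.0)
       + 1e-4 * rng.standard_normal(wavelength.size))

# Differential quantities: continuum removed from both sides of the equation.
tau_diff = remove_continuum(tau)
sigma_diff = np.column_stack([remove_continuum(s) for s in sigma.T])

# Least-squares inversion of tau'(lambda_j) = sum_i sigma'_i(lambda_j) * dc_i
dc_fit, *_ = np.linalg.lstsq(sigma_diff, tau_diff, rcond=None)
print(dc_fit)   # roughly recovers true_dc (the continuum removal is not exact)
```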

Also, unless the path difference between the two measurements can be strictly determined and has some physical meaning (such as the distance between telescope and retro-reflector for a long-path DOAS system), the retrieved quantities will be meaningless. A typical measurement geometry is as follows: the instrument always points straight up, and measurements are taken at two different times of day, once with the sun high in the sky and once with it near the horizon. In both cases the light is scattered into the instrument before passing through the troposphere, but it takes different paths through the stratosphere.

To deal with this, we introduce a quantity called the airmass factor, which gives the ratio between the slant column density (same viewing direction, straight up, but with the sun at some other zenith angle) and the vertical column density (the observation performed looking straight up with the sun at zenith):

$$ \mathrm{amf}_i(\theta) = \frac{S_i(\theta)}{V_i} $$

where $\mathrm{amf}_i$ is the airmass factor of species $i$, $V_i$ is its vertical column and $S_i(\theta)$ is its slant column with the sun at zenith angle $\theta$. Airmass factors can be determined by radiative transfer calculations.
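For a thin absorbing layer high in the stratosphere and moderate solar zenith angles, the airmass factor is often approximated by the simple geometric expression

$$ \mathrm{amf}(\theta) \approx \frac{1}{\cos\theta}, $$

so that, for example, $\theta = 60^{\circ}$ gives an airmass factor of about 2, i.e. the slant column is roughly twice the vertical column. This is an illustrative approximation only, not a substitute for a full radiative transfer calculation.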

Since $S_i(\theta) = \mathrm{amf}_i(\theta)\,V_i$, some algebra shows the vertical column density to be given by:

$$ V_i = \frac{S_i(\theta_2) - S_i(\theta_1)}{\mathrm{amf}_i(\theta_2) - \mathrm{amf}_i(\theta_1)} $$

where $\theta_1$ is the solar zenith angle at the first measurement geometry and $\theta_2$ is the angle at the second. Note that with this method the column along the common path is subtracted from the measurements and cannot be recovered. This means that only the column density in the stratosphere can be retrieved, and the lowest point of scattering between the two measurements must be determined in order to establish where the column begins.
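As a worked example with assumed numbers, using the geometric approximation above purely for illustration: for $\theta_1 = 30^{\circ}$ and $\theta_2 = 75^{\circ}$, $\mathrm{amf}(\theta_1) \approx 1.15$ and $\mathrm{amf}(\theta_2) \approx 3.86$. If the DOAS fit yields a slant column difference $S(\theta_2) - S(\theta_1) = 2.7\times10^{16}\,\mathrm{cm^{-2}}$, then

$$ V \approx \frac{2.7\times10^{16}\,\mathrm{cm^{-2}}}{3.86 - 1.15} \approx 1.0\times10^{16}\,\mathrm{cm^{-2}}. $$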
