Dead time

For detection systems that record discrete events, such as particle and nuclear detectors, the dead time is the time after each event during which the system is not able to record another event. [1] An everyday example is what happens when someone takes a photo using a flash: another picture cannot be taken immediately afterward because the flash needs a few seconds to recharge. In addition to lowering the detection efficiency, dead time can have other effects, such as creating possible exploits in quantum cryptography. [2]

Overview

The total dead time of a detection system is usually due to the contributions of the intrinsic dead time of the detector (for example the ion drift time in a gaseous ionization detector), of the analog front end (for example the shaping time of a spectroscopy amplifier) and of the data acquisition (the conversion time of the analog-to-digital converters and the readout and storage times).

The intrinsic dead time of a detector is often due to its physical characteristics; for example a spark chamber is "dead" until the potential between the plates recovers above a high enough value. In other cases the detector, after a first event, is still "live" and does produce a signal for the successive event, but the signal is such that the detector readout is unable to discriminate and separate them, resulting in an event loss or in a so-called "pile-up" event where, for example, a (possibly partial) sum of the deposited energies from the two events is recorded instead. In some cases this can be minimised by an appropriate design, but often only at the expense of other properties like energy resolution.

The analog electronics can also introduce dead time; in particular, a shaping spectroscopy amplifier needs to integrate a fast-rise, slow-fall signal over the longest possible time (usually from 0.5 up to 10 microseconds) to attain the best possible resolution, so the user needs to choose a compromise between event rate and resolution.

Trigger logic is another possible source of dead time; beyond the proper time of the signal processing, spurious triggers caused by noise need to be taken into account.

Finally, digitisation, readout and storage of the event, especially in detection systems with a large number of channels like those used in modern high-energy physics experiments, also contribute to the total dead time. To alleviate the issue, medium and large experiments use sophisticated pipelining and multi-level trigger logic to reduce the readout rates. [3]

From the total time a detection system is running, the dead time must be subtracted to obtain the live time.
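
For example, under the simplifying assumption of a fixed, non-paralyzable dead time τ per recorded event, a run of total duration T with N_m recorded counts has a live time of approximately T_live ≈ T − N_m τ, and dividing the recorded counts by the live time rather than the total time recovers the true event rate.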

Paralyzable and non-paralyzable behaviour

A detector, or detection system, can be characterized by a paralyzable or non-paralyzable behaviour. [1] In a non-paralyzable detector, an event happening during the dead time is simply lost, so that with an increasing event rate the detector will reach a saturation rate equal to the inverse of the dead time. In a paralyzable detector, an event happening during the dead time will not just be missed, but will restart the dead time, so that with increasing rate the detector will reach a saturation point where it will be incapable of recording any event at all. A semi-paralyzable detector exhibits an intermediate behaviour, in which the event arriving during dead time does extend it, but not by the full amount, resulting in a detection rate that decreases when the event rate approaches saturation.
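
The standard rate equations for the two idealized models are m = n / (1 + nτ) for a non-paralyzable detector and m = n e^{−nτ} for a paralyzable one, where n is the true event rate, m the measured rate and τ the dead time. A minimal Python sketch comparing the two (the function names and the 1 μs dead time are illustrative choices, not taken from any particular library):

    import numpy as np

    def measured_rate_nonparalyzable(n, tau):
        """Non-paralyzable detector: m = n / (1 + n*tau), saturating at 1/tau."""
        return n / (1.0 + n * tau)

    def measured_rate_paralyzable(n, tau):
        """Paralyzable detector: m = n * exp(-n*tau), falling back towards zero at high rates."""
        return n * np.exp(-n * tau)

    tau = 1e-6                                 # assumed dead time of 1 microsecond
    for n in (1e4, 1e5, 1e6, 1e7):             # true event rates in Hz
        print(f"n = {n:.0e} Hz: non-paralyzable {measured_rate_nonparalyzable(n, tau):.3e} Hz, "
              f"paralyzable {measured_rate_paralyzable(n, tau):.3e} Hz")

At low rates both formulas reduce to m ≈ n; the difference only becomes important once nτ is no longer small.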

Analysis

It will be assumed that the events occur randomly with an average frequency of f. That is, they constitute a Poisson process. The probability that an event will occur in an infinitesimal time interval dt is then f dt. It follows that the probability P(t) that an event will occur at time t to t + dt with no events occurring between t = 0 and time t is given by the exponential distribution (Lucke 1974, Meeks 2008):

    P(t)\,dt = f\, e^{-ft}\, dt

The expected time between events is then

    \langle t \rangle = \int_0^\infty t\, P(t)\, dt = \frac{1}{f}
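
A quick numerical check of this, assuming only a Poisson process with an arbitrary rate:

    import numpy as np

    rng = np.random.default_rng(0)
    f = 1000.0                                          # assumed true event rate in Hz
    intervals = rng.exponential(1.0 / f, size=100_000)  # inter-event times of a Poisson process
    print(np.mean(intervals), 1.0 / f)                  # the sample mean approaches 1/f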

Non-paralyzable analysis

For the non-paralyzable case, with a dead time of τ, the probability of measuring an event between t = 0 and t = τ is zero. Otherwise the probabilities of measurement are the same as the event probabilities. The probability of measuring an event at time t with no intervening measurements is then given by an exponential distribution shifted by τ:

    P_m(t)\,dt = 0 \qquad \text{for } t \le \tau

    P_m(t)\,dt = f\, e^{-f(t - \tau)}\, dt \qquad \text{for } t > \tau

The expected time between measurements is then

    \langle t_m \rangle = \int_0^\infty t\, P_m(t)\, dt = \tau + \frac{1}{f}

In other words, if N_m counts are recorded during a particular time interval T and the dead time τ is known, the actual number of events N may be estimated by [4]

    N \approx \frac{N_m}{1 - N_m \tau / T}
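
A minimal sketch of this correction applied to a simulated non-paralyzable detector (the rate, dead time and run length are arbitrary illustrative values):

    import numpy as np

    rng = np.random.default_rng(1)
    f, tau, T = 5e4, 2e-6, 1.0                # true rate (Hz), dead time (s), run length (s)

    # Generate Poisson events over [0, T].
    times = np.cumsum(rng.exponential(1.0 / f, size=int(3 * f * T)))
    times = times[times < T]

    # Non-paralyzable detector: an event is recorded only if it arrives
    # at least tau after the previously *recorded* event.
    recorded, last = 0, -np.inf
    for t in times:
        if t - last >= tau:
            recorded += 1
            last = t

    n_corrected = recorded / (1.0 - recorded * tau / T)   # N ~ N_m / (1 - N_m*tau/T)
    print(f"true events: {len(times)}, recorded: {recorded}, corrected estimate: {n_corrected:.0f}")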

If the dead time is not known, a statistical analysis can yield the correct count. For example (Meeks 2008), if the t_i are a set of intervals between measurements, then the t_i will have a shifted exponential distribution, but if a fixed value D is subtracted from each interval, with negative values discarded, the distribution will be exponential as long as D is greater than the dead time τ. For an exponential distribution, the following relationship holds:

    \langle t^n \rangle = n!\, \langle t \rangle^n
where n is any integer. If the above relationship is checked for many measured intervals with various values of D subtracted (and for various values of n), it should be found that for values of D above a certain threshold the equation is nearly satisfied, and the count rate derived from these modified intervals equals the true count rate.
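
A sketch of this procedure for n = 2, where the ratio ⟨t²⟩ / (2⟨t⟩²) approaches 1 once D exceeds the dead time (simulated shifted-exponential intervals with assumed parameters):

    import numpy as np

    rng = np.random.default_rng(2)
    f, tau = 1e4, 5e-6                       # assumed true rate (Hz) and dead time (s)
    # Intervals between measurements of an ideal non-paralyzable detector
    # follow a shifted exponential distribution.
    intervals = tau + rng.exponential(1.0 / f, size=200_000)

    for D in (1e-6, 3e-6, 5e-6, 7e-6, 1e-5):
        shifted = intervals - D
        shifted = shifted[shifted > 0]                               # discard negative values
        ratio = np.mean(shifted**2) / (2.0 * np.mean(shifted)**2)    # equals 1 for an exponential
        rate = 1.0 / np.mean(shifted)                                # count rate from modified intervals
        print(f"D = {D:.0e} s: <t^2>/(2<t>^2) = {ratio:.3f}, estimated rate = {rate:.0f} Hz")

For D below the dead time the ratio stays below 1 and the derived rate is too low; once D reaches τ, both settle at their true values.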

Time-To-Count

With a modern microprocessor-based ratemeter, one technique for measuring field strength with detectors that have a recovery time (e.g., Geiger–Müller tubes) is Time-To-Count. In this technique, the detector is armed at the same time a counter is started. When a strike occurs, the counter is stopped. If this happens many times in a certain time period (e.g., two seconds), then the mean time between strikes can be determined, and thus the count rate. Live time, dead time, and total time are thus measured, not estimated. This technique is used quite widely in radiation monitoring systems used in nuclear power generating stations.
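
A rough sketch of the idea as a simulation (a toy model with assumed parameter values, not the firmware of any actual ratemeter):

    import numpy as np

    rng = np.random.default_rng(3)
    f, recovery, window = 2e4, 1e-4, 2.0   # assumed true rate (Hz), recovery time (s), measurement period (s)

    t, waits = 0.0, []
    while t < window:
        wait = rng.exponential(1.0 / f)    # armed detector, counter running: time until the next strike
        waits.append(wait)
        t += wait + recovery               # after the strike, wait out the recovery time before re-arming

    live = sum(waits)                      # live time is the total time spent armed
    print(f"strikes: {len(waits)}, live time: {live:.2f} s of {t:.2f} s, "
          f"estimated rate: {1.0 / np.mean(waits):.0f} Hz")

Because the counter only runs while the detector is armed, the mean of the measured waits estimates 1/f directly, independent of the recovery time.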

References

  1. W. R. Leo (1994). Techniques for Nuclear and Particle Physics Experiments. Springer. pp. 122–127. ISBN 3-540-57280-5.
  2. Weier, H.; et al. (2011). "Quantum eavesdropping without interception: an attack exploiting the dead time of single-photon detectors". New Journal of Physics. 13 (7): 073024. arXiv:1101.5289. Bibcode:2011NJPh...13g3024W. doi:10.1088/1367-2630/13/7/073024.
  3. Carena, F.; et al. (December 2010). ALICE DAQ and ECS Manual (PDF). ALICE Internal Note ALICE-INT-2010-001.
  4. Patil, Amol (2010). "Dead time and count loss determination for radiation detection systems in high count rate applications". Doctoral Dissertations. 2148.

Further reading

Morris, S.L. and Naftilan, S.A., "Determining Photometric Dead Time by Using Hydrogen Filters", Astron. Astrophys. Suppl. Ser. 107, 71-75, Oct. 1994