The stretched exponential function

fβ(t) = exp(−(t/τK)^β)

is obtained by inserting a fractional power law into the exponential function. In most applications, it is meaningful only for arguments t between 0 and +∞. With β = 1, the usual exponential function is recovered. With a stretching exponent β between 0 and 1, the graph of log f versus t is characteristically stretched, hence the name of the function. The compressed exponential function (with β > 1) has less practical importance, with the notable exception of β = 2, which gives the normal distribution.
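The behaviour described above can be checked directly; the sketch below (using only the Python standard library; the helper name is illustrative) shows that β = 1 recovers the plain exponential, and that β < 1 decays faster than the exponential for t < τK but has a slower, "stretched" tail for t > τK.

```python
import math

def stretched_exp(t, tau=1.0, beta=1.0):
    """Stretched exponential f(t) = exp(-(t/tau)**beta), for t >= 0."""
    return math.exp(-((t / tau) ** beta))

# beta = 1 recovers the ordinary exponential decay exactly
assert abs(stretched_exp(2.0) - math.exp(-2.0)) < 1e-12

# beta = 0.5: faster decay than exp(-t) before t = tau ...
assert stretched_exp(0.1, beta=0.5) < math.exp(-0.1)   # ~0.729 vs ~0.905
# ... but a much slower ("stretched") tail after t = tau
assert stretched_exp(10.0, beta=0.5) > math.exp(-10.0)
```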
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four.
In mathematics, an exponential function is a function of the form f(x) = b^x, where the base b is a positive real number and the argument x occurs as an exponent.
In probability theory, the normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
In mathematics, the stretched exponential is also known as the complementary cumulative Weibull distribution. The stretched exponential is also the characteristic function, essentially the Fourier transform, of the Lévy symmetric alpha-stable distribution.
In probability theory and statistics, the Weibull distribution is a continuous probability distribution. It is named after Swedish mathematician Waloddi Weibull, who described it in detail in 1951, although it was first identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe a particle size distribution.
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
The Fourier transform (FT) decomposes a function of time into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the frequencies of its constituent notes. The Fourier transform of a function of time is itself a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function, and whose complex argument is the phase offset of the basic sinusoid in that frequency. The Fourier transform is called the frequency domain representation of the original signal. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a unified language, the domain of the original function is commonly referred to as the time domain. For many functions of practical interest, one can define an operation that reverses this: the inverse Fourier transformation, also called Fourier synthesis, of a frequency domain representation combines the contributions of all the different frequencies to recover the original function of time. In image processing the notion of a time domain is replaced by that of a spatial domain where the intensity of a signal is identified by its spatial position rather than at any point in time.
In physics, the stretched exponential function is often used as a phenomenological description of relaxation in disordered systems. It was first introduced by Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor; therefore it is also called the Kohlrausch function. In 1970, G. Williams and D.C. Watts used the Fourier transform of the stretched exponential to describe dielectric spectra of polymers; in this context, the stretched exponential or its Fourier transform are also called the Kohlrausch–Williams–Watts (KWW) function.
In the physical sciences, relaxation usually means the return of a perturbed system into equilibrium. Each relaxation process can be categorized by a relaxation time τ. The simplest theoretical description of relaxation as function of time t is an exponential law exp(-t/τ).
Rudolf Hermann Arndt Kohlrausch was a German physicist.
Dielectric spectroscopy measures the dielectric properties of a medium as a function of frequency. It is based on the interaction of an external field with the electric dipole moment of the sample, often expressed by permittivity.
In phenomenological applications, it is often not clear whether the stretched exponential function should apply to the differential or to the integral distribution function—or to neither. In each case one gets the same asymptotic decay, but a different power law prefactor, which makes fits more ambiguous than for simple exponentials. In a few cases it can be shown that the asymptotic decay is a stretched exponential, but the prefactor is usually an unrelated power.
Following the usual physical interpretation, we interpret the function argument t as a time, and fβ(t) is the differential distribution. The area under the curve is therefore interpreted as a mean relaxation time. One finds

〈τ〉 ≡ ∫₀^∞ dt exp(−(t/τK)^β) = (τK/β) Γ(1/β),

where Γ is the gamma function. For exponential decay (β = 1), 〈τ〉 = τK is recovered.
In mathematics, the gamma function is one of a number of extensions of the factorial function with its argument shifted down by 1, to real and complex numbers. Derived by Daniel Bernoulli, it satisfies Γ(n) = (n − 1)! for every positive integer n.
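Both relations are easy to check numerically: Γ(n) = (n − 1)! for integer n, and the area under the stretched exponential equals (τK/β) Γ(1/β). The sketch below (stdlib only; helper names are illustrative) verifies the latter for β = 1/2, where the analytic value is exactly 2τK.

```python
import math

def stretched_exp(t, tau_k, beta):
    return math.exp(-((t / tau_k) ** beta))

def mean_relaxation_time(tau_k, beta, t_max=200.0, n=200000):
    # midpoint-rule estimate of the area under f_beta(t) on [0, t_max];
    # the truncated tail is negligible for these parameters
    h = t_max / n
    return h * sum(stretched_exp((i + 0.5) * h, tau_k, beta) for i in range(n))

# the gamma function extends the factorial: Gamma(n) = (n-1)!
assert math.gamma(5) == math.factorial(4)

# <tau> = (tau_K / beta) * Gamma(1 / beta); for beta = 1/2 this is 2 * tau_K
analytic = (1.0 / 0.5) * math.gamma(1.0 / 0.5)   # = 2.0
assert abs(mean_relaxation_time(1.0, 0.5) - analytic) < 1e-3
```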
The higher moments of the stretched exponential function are

〈τ^n〉 ≡ ∫₀^∞ dt t^(n−1) exp(−(t/τK)^β) = (τK^n/β) Γ(n/β).
In mathematics, a moment is a specific quantitative measure of the shape of a function. It is used in both mechanics and statistics. If the function represents physical density, then the zeroth moment is the total mass, the first moment divided by the total mass is the center of mass, and the second moment is the rotational inertia. If the function is a probability distribution, then the zeroth moment is the total probability, the first moment is the mean, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
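The moment formula 〈τ^n〉 = (τK^n/β) Γ(n/β) can be checked by direct integration; the sketch below (stdlib only; helper names are illustrative) does so for β = 1/2, where the first two moments are 2 and 12.

```python
import math

def stretched_exp(t, tau_k, beta):
    return math.exp(-((t / tau_k) ** beta))

def moment(n, tau_k, beta, t_max=400.0, steps=400000):
    # midpoint rule for <tau^n> = int_0^inf t**(n-1) * f_beta(t) dt
    h = t_max / steps
    return h * sum(((i + 0.5) * h) ** (n - 1) * stretched_exp((i + 0.5) * h, tau_k, beta)
                   for i in range(steps))

# analytic result: <tau^n> = (tau_K**n / beta) * Gamma(n / beta)
tau_k, beta = 1.0, 0.5
for n in (1, 2):
    analytic = (tau_k ** n / beta) * math.gamma(n / beta)   # 2.0 and 12.0
    assert abs(moment(n, tau_k, beta) - analytic) < 1e-2
```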
In physics, attempts have been made to explain stretched exponential behaviour as a linear superposition of simple exponential decays. This requires a nontrivial distribution of relaxation times, ρ(u), which is implicitly defined by

fβ(t) = ∫₀^∞ du ρ(u) exp(−t/u).
Alternatively, the distribution ρ can be computed from a series expansion.
For rational values of β, ρ(u) can be calculated in terms of elementary functions. But the expression is in general too complex to be useful, except for the case β = 1/2, where (in units with τK = 1)

ρ(u) = exp(−u/4) / (2√(πu)).
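This closed form (which follows from the Laplace transform of the one-sided Lévy distribution with index 1/2) can be verified numerically: superposing simple exponentials exp(−t/u) weighted by ρ(u) = exp(−u/4)/(2√(πu)) must reproduce exp(−√t). The sketch below (stdlib only; helper name is illustrative) substitutes u = s² to remove the 1/√u singularity at the origin before integrating.

```python
import math

def superposed_decay(t, s_max=40.0, steps=40000):
    # int_0^inf rho(u) * exp(-t/u) du  with  rho(u) = exp(-u/4)/(2*sqrt(pi*u));
    # after u = s**2 the integrand becomes exp(-s**2/4)*exp(-t/s**2)/sqrt(pi)
    h = s_max / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h            # midpoint rule in s
        total += math.exp(-s * s / 4.0) * math.exp(-t / (s * s))
    return total * h / math.sqrt(math.pi)

# the superposition reproduces the stretched exponential with beta = 1/2
for t in (0.25, 1.0, 4.0):
    assert abs(superposed_decay(t) - math.exp(-math.sqrt(t))) < 1e-3
```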
Figure 2 shows the same results plotted in both a linear and a log representation. The curves converge to a Dirac delta function peaked at u = 1 as β approaches 1, corresponding to the simple exponential function.
Figure 2. Linear and log-log plots of the stretched exponential distribution function ρ(u) vs u, for values of the stretching parameter β between 0.1 and 0.9.
The moments of the original function can be expressed in terms of the moments of ρ(u) as

〈τ^n〉 = Γ(n) ∫₀^∞ du u^n ρ(u).
The first logarithmic moment of the distribution of simple-exponential relaxation times is

〈ln u〉 = (1 − 1/β)·Eu + ln τK,

where Eu is the Euler–Mascheroni constant.
To describe results from spectroscopy or inelastic scattering, the sine or cosine Fourier transform of the stretched exponential is needed. It must be calculated either by numeric integration or from a series expansion. The series here, as well as the one for the distribution function, are special cases of the Fox–Wright function. For practical purposes, the Fourier transform may be approximated by the Havriliak–Negami function, though nowadays the numeric computation can be done so efficiently that there is no longer any reason not to use the Kohlrausch–Williams–Watts function in the frequency domain.
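A minimal numeric-integration sketch of the cosine transform is given below (stdlib only; helper names are illustrative, and in practice a library quadrature or series expansion would be preferred). For β = 1 the transform has the exact Debye form τ/(1 + (ωτ)²), which serves as a check.

```python
import math

def kww_cosine_transform(omega, tau_k=1.0, beta=1.0, t_max=40.0, steps=200000):
    # midpoint rule for F_c(omega) = int_0^inf cos(omega*t) * exp(-(t/tau_K)**beta) dt
    h = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.cos(omega * t) * math.exp(-((t / tau_k) ** beta))
    return total * h

# beta = 1 check against the analytic Debye result tau / (1 + (omega*tau)**2)
omega, tau = 2.0, 1.0
debye = tau / (1.0 + (omega * tau) ** 2)          # = 0.2
assert abs(kww_cosine_transform(omega, tau, 1.0) - debye) < 1e-6
```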
As mentioned in the introduction, the stretched exponential was introduced by the German physicist Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor (Leyden jar) that used glass as its dielectric medium. The next documented usage is by Friedrich Kohlrausch, son of Rudolf, to describe torsional relaxation. A. Werner used it in 1907 to describe complex luminescence decays, and Theodor Förster used it in 1949 as the fluorescence decay law of electronic energy donors.
Outside condensed matter physics, the stretched exponential has been used to describe the removal rates of small, stray bodies in the solar system, the diffusion-weighted MRI signal in the brain, and the production from unconventional gas wells.
If the integrated distribution is a stretched exponential, i.e. the survival function is exp(−(t/τ)^β), the normalized probability density function is given by

p(t) = (β/τ) (t/τ)^(β−1) exp(−(t/τ)^β).
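This density is just the Weibull probability density function, obtained as minus the derivative of the survival function; the sketch below (stdlib only; helper names are illustrative) confirms this against a central finite difference.

```python
import math

def survival(t, tau, beta):
    """Integrated (survival) distribution: exp(-(t/tau)**beta)."""
    return math.exp(-((t / tau) ** beta))

def weibull_pdf(t, tau, beta):
    """Density p(t) = -d/dt survival(t) = (beta/tau)*(t/tau)**(beta-1)*survival(t)."""
    return (beta / tau) * (t / tau) ** (beta - 1.0) * survival(t, tau, beta)

# check p(t) against a central finite difference of the survival function
tau, beta, t, h = 2.0, 1.5, 1.3, 1e-6
numeric = -(survival(t + h, tau, beta) - survival(t - h, tau, beta)) / (2 * h)
assert abs(numeric - weibull_pdf(t, tau, beta)) < 1e-6
```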
Note that, confusingly, some authors have been known to use the name "stretched exponential" to refer to the Weibull distribution.
A modified stretched exponential function, with a slowly t-dependent exponent β, has been used for biological survival curves.
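One way to realize such a function is sketched below. The specific logarithmic form of β(t) is an illustrative assumption, not the form used in any particular survival-curve study; the point is only that the exponent drifts slowly with t.

```python
import math

def modified_stretched_exp(t, tau=1.0, beta0=0.8, eps=0.02):
    """Sketch of a modified stretched exponential with a slowly
    t-dependent exponent beta(t) = beta0 + eps*log(1 + t).
    The form of beta(t) is an illustrative assumption."""
    beta_t = beta0 + eps * math.log1p(t)
    return math.exp(-((t / tau) ** beta_t))

# sanity checks: starts at 1 and decays monotonically
assert modified_stretched_exp(0.0) == 1.0
assert modified_stretched_exp(1.0) > modified_stretched_exp(2.0) > 0.0
```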
In mathematics, convolution is an operation on two functions that produces a third function expressing how the shape of one is modified by the other. The term convolution refers both to the result function and to the process of computing it. Convolution is similar to cross-correlation. For real-valued functions of a continuous or discrete variable, it differs from cross-correlation only in that either f(x) or g(x) is reflected about the y-axis; thus it is a cross-correlation of f(x) and g(−x), or of f(−x) and g(x). For continuous functions, the cross-correlation operator is the adjoint of the convolution operator.
Fractional calculus is a branch of mathematical analysis that studies the several possibilities of defining real or complex number powers of the differentiation operator D.
Linear time-invariant theory, commonly known as LTI system theory, comes from applied mathematics and has direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. It investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are commonly measured and tracked as they move through time, but in applications like image processing and field theory, LTI systems also have trajectories in spatial dimensions. Thus, these systems are also called linear translation-invariant, to give the theory the most general reach. In the case of generic discrete-time systems, linear shift-invariant is the corresponding term. A good example of an LTI system is an electrical circuit made up of resistors, capacitors, and inductors.
In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution. Perhaps the chief use of the inverse gamma distribution is in Bayesian statistics, where the distribution arises as the marginal posterior distribution for the unknown variance of a normal distribution, if an uninformative prior is used, and as an analytically tractable conjugate prior, if an informative prior is required.
The Havriliak–Negami relaxation is an empirical modification of the Debye relaxation model in electromagnetism. Unlike the Debye model, the Havriliak–Negami relaxation accounts for the asymmetry and broadness of the dielectric dispersion curve. The model was first used to describe the dielectric relaxation of some polymers, by adding two exponential parameters to the Debye equation:

ε̂(ω) = ε∞ + Δε / (1 + (iωτ)^α)^β.
In the statistical mechanics of quantum mechanical systems and quantum field theory, the properties of a system in thermal equilibrium can be described by a mathematical object called a Kubo–Martin–Schwinger state or, more commonly, a KMS state: a state satisfying the KMS condition. Kubo (1957) introduced the condition, Martin & Schwinger (1959) used it to define thermodynamic Green's functions, and Rudolf Haag, M. Winnink, and N. M. Hugenholtz (1967) used the condition to define equilibrium states and called it the KMS condition.
The Mason–Weaver equation describes the sedimentation and diffusion of solutes under a uniform force, usually a gravitational field. Assuming that the gravitational field is aligned in the z direction, the Mason–Weaver equation may be written

∂c/∂t = D ∂²c/∂z² + sg ∂c/∂z,

where c is the solute concentration, D its diffusion constant, s its sedimentation coefficient, and g the acceleration of gravity.
Dynamic light scattering (DLS) is a technique in physics that can be used to determine the size distribution profile of small particles in suspension or polymers in solution. In the scope of DLS, temporal fluctuations are usually analyzed by means of the intensity or photon auto-correlation function. In the time domain analysis, the autocorrelation function (ACF) usually decays starting from zero delay time, and faster dynamics due to smaller particles lead to faster decorrelation of the scattered intensity trace. It has been shown that the intensity ACF is the Fourier transform of the power spectrum, and therefore DLS measurements can be equally well performed in the spectral domain. DLS can also be used to probe the behavior of complex fluids such as concentrated polymer solutions.
The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors, two real and a complex-conjugate pair. The two real members asymptotically point radially inward and radially outward, and the formalism is well adapted to the treatment of the propagation of radiation in curved spacetime. The most often-used variables in the formalism are the Weyl scalars, derived from the Weyl tensor. In particular, it can be shown that one of these scalars, in the appropriate frame, encodes the outgoing gravitational radiation of an asymptotically flat system.
In the Newman–Penrose (NP) formalism of general relativity, Weyl scalars refer to a set of five complex scalars which encode the ten independent components of the Weyl tensor of a four-dimensional spacetime.
Continuous wavelets of compact support can be built which are related to the beta distribution. The process is derived from probability distributions using the blur derivative. These new wavelets have just one cycle, so they are termed unicycle wavelets. They can be viewed as a soft variety of Haar wavelets whose shape is fine-tuned by two parameters. Closed-form expressions for beta wavelets and scale functions, as well as for their spectra, are derived. Their importance is due to the central limit theorem of Gnedenko and Kolmogorov applied to compactly supported signals.
In many-body theory, the term Green's function is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators.
Resonance fluorescence is the process in which a two-level atom system interacts with the quantum electromagnetic field when the field is driven at a frequency near the natural frequency of the atom.
Quantile regression is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares estimates the conditional mean of the response variable given certain values of the predictor variables, quantile regression estimates the conditional median or other quantiles of the response variable. Essentially, quantile regression is an extension of linear regression, used when the conditions of linear regression are not met.
Bilinear time–frequency distributions, or quadratic time–frequency distributions, arise in a sub-field of signal analysis and signal processing called time–frequency signal processing, and in the statistical analysis of time series data. Such methods are used where one needs to deal with a situation where the frequency composition of a signal may be changing over time; this sub-field used to be called time–frequency signal analysis, and is now more often called time–frequency signal processing due to the progress in applying these methods to a wide range of signal-processing problems.
Nuclear magnetic resonance (NMR) in porous materials covers the application of using NMR as a tool to study the structure of porous media and various processes occurring in them. This technique allows the determination of characteristics such as the porosity and pore size distribution, the permeability, the water saturation, the wettability, etc.
Phonons can scatter through several mechanisms as they travel through the material. These scattering mechanisms are: Umklapp phonon-phonon scattering, phonon-impurity scattering, phonon-electron scattering, and phonon-boundary scattering. Each scattering mechanism can be characterised by a relaxation rate 1/τ, the inverse of the corresponding relaxation time.
In probability theory, a fractional Poisson process is a stochastic process that models the long-memory dynamics of a stream of counts. The time interval between each pair of consecutive counts follows a non-exponential power-law distribution. In other words, the fractional Poisson process is a non-Markovian counting process which exhibits a non-exponential distribution of interarrival times. It is a continuous-time process that can be thought of as a natural generalization of the well-known Poisson process. The fractional Poisson probability distribution is a new member of the class of discrete probability distributions.
Martin Hairer's theory of regularity structures provides a framework for studying a large class of subcritical parabolic stochastic partial differential equations arising from quantum field theory. The framework covers the Kardar–Parisi–Zhang equation and the parabolic Anderson model, among other equations which require renormalization in order to have a well-defined notion of solution.
In premixed turbulent combustion, Bray–Moss–Libby model is a closure model for a scalar field, built on the assumption that the reaction sheet is infinitely thin compared with the turbulent scales, so that the scalar can be found either at the state of burnt gas or unburnt gas. The model is named after Kenneth Bray, J. B. Moss and Paul A. Libby.