The raindrop size distribution (DSD), or granulometry of rain, is the distribution of the number of raindrops according to their diameter (D). Three processes account for the formation of drops: water vapor condensation, the accumulation of small drops on large drops, and collisions between drops of different sizes. Depending on the time spent in the cloud, the vertical motion within it, and the ambient temperature, drops have a very varied history and a distribution of diameters from a few micrometers to a few millimeters.
In general, the drop size distribution is represented as a truncated gamma function for diameters from zero to the maximum possible size of rain droplets. [2][3] The number of drops with diameter between $D$ and $D + dD$ per unit volume is therefore:

$$N(D)\,dD = N_0 D^{\mu} e^{-\Lambda D}\,dD, \qquad 0 \le D \le D_{\max}$$

with $N_0$, $\mu$ and $\Lambda$ as constants.
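As a quick numerical illustration, the truncated gamma form above can be evaluated directly. The short Python sketch below uses placeholder values for $N_0$, $\mu$, $\Lambda$ and the cut-off diameter, none of which are prescribed at this point in the text:

```python
import numpy as np

def gamma_dsd(d_mm, n0=8000.0, mu=0.0, lam=4.1, d_max=8.0):
    """Truncated gamma DSD: N(D) = n0 * D^mu * exp(-lam * D),
    in drops per m^3 per mm of diameter, cut off at d_max (mm).
    All parameter values here are illustrative placeholders."""
    d = np.asarray(d_mm, dtype=float)
    n = n0 * d**mu * np.exp(-lam * d)
    return np.where((d >= 0.0) & (d <= d_max), n, 0.0)

diameters = np.linspace(0.1, 6.0, 60)     # drop diameters, mm
concentrations = gamma_dsd(diameters)     # concentrations, m^-3 mm^-1
```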
The most well-known study of raindrop size distribution is that of Marshall and Palmer, done at McGill University in Montréal in 1948. [4] They used stratiform rain with $\mu = 0$ and concluded that the drop size distribution is exponential. This Marshall-Palmer distribution is expressed as:

$$N(D) = N_0 e^{-\Lambda D}$$

where $N_0 = 8000\ \mathrm{m^{-3}\,mm^{-1}}$ and $\Lambda = 4.1\,R^{-0.21}\ \mathrm{mm^{-1}}$, $R$ being the rainfall rate in millimeters per hour.
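Plugging the Marshall-Palmer constants into the exponential form parameterizes the whole spectrum by the rain rate alone. A minimal sketch, with diameters and rain rates chosen only for illustration:

```python
import numpy as np

def marshall_palmer(d_mm, rain_rate):
    """Marshall-Palmer DSD: N(D) = 8000 * exp(-4.1 * R^-0.21 * D),
    with D in mm and R in mm/h; result in m^-3 mm^-1."""
    lam = 4.1 * rain_rate ** (-0.21)          # slope parameter, mm^-1
    return 8000.0 * np.exp(-lam * np.asarray(d_mm))

for rate in (1.0, 10.0, 50.0):                # light to heavy rain
    print(f"R = {rate:5.1f} mm/h -> N(2 mm) = "
          f"{marshall_palmer(2.0, rate):7.1f} m^-3 mm^-1")
```

Note how the intercept $N_0$ stays fixed while heavier rain flattens the slope, shifting the spectrum toward larger drops.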
As the different types of precipitation (rain, snow, sleet, etc.), and the different types of clouds that produce them, vary in time and space, the coefficients of the drop distribution function vary with each situation. The Marshall-Palmer relationship is still the most quoted, but it must be remembered that it is an average of many stratiform rain events in mid-latitudes. [4] The upper figure shows mean distributions of stratiform and convective rainfall. The linear part of the distributions can be fitted with particular values of $\Lambda$ in the Marshall-Palmer distribution. The bottom one is a series of drop-diameter distributions from several convective events in Florida with different precipitation rates. The experimental curves are more complex than the average ones, but their general appearance is the same.
Many other forms of distribution functions are therefore found in the meteorological literature to adjust the particle size distribution more precisely to particular events. Over time, researchers have realized that the distribution of drops is more a problem of the probability of producing drops of different diameters depending on the type of precipitation than a deterministic relationship. So there is a continuum of families of curves for stratiform rain, and another for convective rain. [4]
The Marshall and Palmer distribution uses an exponential function that does not properly simulate drops of very small diameters (the curve in the top figure). Several experiments have shown that the actual number of these droplets is less than the theoretical curve predicts. Carlton W. Ulbrich developed a more general formula in 1983, taking into account that a drop is spherical if $D < 1$ mm and an ellipsoid whose horizontal axis gets flattened as $D$ gets larger. It is mechanically impossible to exceed $D = 10$ mm, as the drop breaks up at large diameters. From the general distribution, the diameter spectrum changes: $\mu = 0$ inside the cloud, where the evaporation of small drops is negligible due to saturation conditions, and $\mu = 2$ out of the cloud, where the small drops evaporate because they are in drier air. With the same notation as before, we have for drizzle the distribution of Ulbrich: [3]

$$N(D) = N_0 D^2 e^{-\Lambda D}$$

where $N_0$ and $\Lambda$ are determined by the liquid water content LWC, the density of water $\rho_w$, and 0.2 mm, an average value of the drop diameter in drizzle. For rain, $N_0$ and $\Lambda$ are instead expressed through the rainrate R (mm/h), the amount of rain per hour over a standard surface. [3]
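One way to make the drizzle parameters concrete is through the moments of the gamma form: for $\mu = 2$ the mean diameter is $3/\Lambda$, and the third moment of the distribution gives the liquid water content. The sketch below rests on those two standard identities only; Ulbrich's exact closed form is not reproduced in the text, so these relations are a consistent reconstruction, and the sample LWC value is illustrative:

```python
from math import gamma, pi

def dsd_params_from_lwc(lwc_g_m3, mean_d_mm=0.2, mu=2):
    """Recover N0 and lam of N(D) = N0 * D^mu * exp(-lam * D) from the
    liquid water content and mean drop diameter, using the identities:
      mean diameter = (mu + 1) / lam
      LWC = (pi/6) * rho_w * N0 * Gamma(mu + 4) / lam^(mu + 4)
    with D in mm, N(D) in m^-3 mm^-1 and rho_w = 1e-3 g/mm^3."""
    rho_w = 1e-3                              # density of water, g/mm^3
    lam = (mu + 1) / mean_d_mm                # slope, mm^-1
    n0 = lwc_g_m3 * lam**(mu + 4) / ((pi / 6) * rho_w * gamma(mu + 4))
    return n0, lam

n0, lam = dsd_params_from_lwc(lwc_g_m3=0.1)   # drizzle-like water content
print(f"N0 = {n0:.3g} m^-3 mm^-(mu+1), lambda = {lam:.1f} mm^-1")
```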
The first measurements of this distribution were made with a rather rudimentary tool by Palmer, Marshall's student, who exposed a cardboard sheet covered with flour to the rain for a short time. The mark left by each drop being proportional to its diameter, he could determine the distribution by counting the number of marks corresponding to each droplet size. This was immediately after the Second World War.
Different devices, such as the disdrometer, have since been developed to measure this distribution more accurately.
Knowledge of the distribution of raindrops in a cloud can be used to relate what is recorded by a weather radar to what is obtained on the ground as the amount of precipitation, that is, to find the relation between the reflectivity of the radar echoes and what is measured with a device like the disdrometer.
The rainrate ($R$) combines the number of particles ($N(D)$), their volume ($\pi D^3/6$) and their fall speed ($v(D)$):

$$R = \int_0^{D_{\max}} N(D)\,\frac{\pi D^3}{6}\,v(D)\,dD$$

The radar reflectivity $Z$ is:

$$Z = \int_0^{D_{\max}} N(D)\,D^6\,dD$$

$Z$ and $R$ having similar formulations, one can solve the equations to obtain a Z-R relation of the type: [5]

$$Z = aR^b$$
where $a$ and $b$ are related to the type of precipitation (rain, snow, convective (as in thunderstorms) or stratiform (as from nimbostratus clouds)), which have different $\Lambda$, $K$ (the dielectric factor), $N_0$ and $v(D)$.
The best known of these relations is the Marshall-Palmer Z-R relationship, which gives $a = 200$ and $b = 1.6$. [6] It is still one of the most used because it is valid for synoptic rain in mid-latitudes, a very common case. Other relationships have been found for snow, rain under thunderstorms, tropical rain, etc. [6]
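As a consistency check, one can integrate the Marshall-Palmer spectrum numerically and compare the resulting pair $(Z, R)$ with the $Z = 200R^{1.6}$ fit. The fall-speed law $v(D) \approx 3.78\,D^{0.67}$ m/s (D in mm) used below is a common empirical power law assumed for this sketch, not something given in the text:

```python
import numpy as np

def mp_dsd(d, rate):
    """Marshall-Palmer DSD, m^-3 mm^-1 (d in mm, rate in mm/h)."""
    return 8000.0 * np.exp(-4.1 * rate**(-0.21) * d)

def fall_speed(d):
    """Assumed empirical power-law terminal fall speed, m/s (d in mm)."""
    return 3.78 * d**0.67

d = np.linspace(0.01, 8.0, 2000)         # integration grid, mm
for rate in (1.0, 5.0, 20.0):
    n = mp_dsd(d, rate)
    # R = integral of (pi/6) D^3 v(D) N(D) dD; mm^3 = 1e-9 m^3, and
    # m/s = 3.6e6 mm/h converts the result to mm of rain per hour.
    r = np.trapz((np.pi / 6) * d**3 * 1e-9 * fall_speed(d) * n, d) * 3.6e6
    z = np.trapz(d**6 * n, d)            # reflectivity factor, mm^6 m^-3
    print(f"nominal R = {rate:4.1f} -> integrated R = {r:5.2f} mm/h, "
          f"Z = {z:9.0f}, 200 * R^1.6 = {200 * rate**1.6:9.0f}")
```

The integrated pairs fall near the fitted curve, illustrating that $a$ and $b$ summarize an average over drop spectra rather than an exact law.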