The frequency of exceedance, sometimes called the annual rate of exceedance, is the frequency with which a random process exceeds some critical value. Typically, the critical value is far from the mean. It is usually defined in terms of the number of peaks of the random process that fall outside the boundary. It has applications in predicting extreme events, such as major earthquakes and floods.
The frequency of exceedance is the number of times a stochastic process exceeds some critical value, usually a critical value far from the process's mean, per unit time. [1] Counting exceedances of the critical value can be accomplished either by counting peaks of the process that exceed the critical value [1] or by counting upcrossings of the critical value, where an upcrossing is an event where the instantaneous value of the process crosses the critical value with positive slope. [1] [2] This article assumes the two methods of counting exceedances are equivalent and that the process has one upcrossing and one peak per exceedance. However, processes, especially continuous processes with high-frequency components in their power spectral densities, may have multiple upcrossings or multiple peaks in rapid succession before the process reverts to its mean. [3]
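As a concrete illustration of the two counting conventions, here is a minimal sketch (Python with NumPy; the sampled record and critical value are hypothetical) that counts both upcrossings and peaks above a critical value in a discretely sampled record. With high-frequency noise present, the two counts can differ, as noted above.

```python
import numpy as np

def count_upcrossings(y, y_crit):
    """Count samples where y crosses y_crit with positive slope."""
    above = y > y_crit
    # an upcrossing occurs where the record goes from <= y_crit to > y_crit
    return int(np.sum(~above[:-1] & above[1:]))

def count_peaks_above(y, y_crit):
    """Count local maxima of the sampled record that exceed y_crit."""
    is_peak = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])
    return int(np.sum(is_peak & (y[1:-1] > y_crit)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 10_001)  # 10 s sampled at 1 kHz
# a smooth oscillation plus high-frequency noise, so the counts can disagree
y = np.sin(2.0 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
print(count_upcrossings(y, 0.9), count_peaks_above(y, 0.9))
```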
Consider a scalar, zero-mean Gaussian process y(t) with variance σ_y^2 and power spectral density Φ_y(f), where f is a frequency. Over time, this Gaussian process has peaks that exceed some critical value y_max > 0. Counting the number of upcrossings of y_max, the frequency of exceedance of y_max is given by [1] [2]

$$N(y_{\max}) = N_0 \exp\left(-\frac{y_{\max}^2}{2\sigma_y^2}\right).$$
N_0 is the frequency of upcrossings of 0 and is related to the power spectral density by

$$N_0 = \sqrt{\frac{\int_0^\infty f^2 \Phi_y(f)\,df}{\int_0^\infty \Phi_y(f)\,df}}.$$
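As a sketch of how N_0 and N(y_max) might be evaluated numerically, assume a hypothetical low-pass power spectral density Φ_y(f) = 1/(1 + (f/f_c)^4), which decays as f^{-4}, steeply enough for the numerator integral to converge; SciPy's quad handles the semi-infinite integrals. The corner frequency, RMS value, and critical value below are all assumed for illustration.

```python
import numpy as np
from scipy.integrate import quad

f_c = 2.0  # assumed corner frequency, Hz
def psd(f):
    """Hypothetical low-pass PSD, decaying as f**-4 at high frequency."""
    return 1.0 / (1.0 + (f / f_c) ** 4)

# N_0 = sqrt( int f^2 Phi_y(f) df / int Phi_y(f) df )
num, _ = quad(lambda f: f**2 * psd(f), 0.0, np.inf)
den, _ = quad(psd, 0.0, np.inf)
N_0 = np.sqrt(num / den)

sigma_y = 1.0  # assumed RMS of the process
y_max = 3.0    # critical value, here 3 sigma
N = N_0 * np.exp(-y_max**2 / (2.0 * sigma_y**2))
print(f"N_0 = {N_0:.3f} /s, N(y_max) = {N:.4f} /s")
```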
For a Gaussian process, the approximation that the number of peaks above the critical value and the number of upcrossings of the critical value are the same is good for y_max/σ_y > 2 and for narrow-band noise. [1]
For power spectral densities that decay less steeply than f^{-3} as f → ∞, the integral in the numerator of N_0 does not converge. Hoblit gives methods for approximating N_0 in such cases with applications aimed at continuous gusts. [4]
As the random process evolves over time, the number of peaks that have exceeded the critical value y_max grows and is itself a counting process. For many types of distributions of the underlying random process, including Gaussian processes, the number of peaks above the critical value y_max converges to a Poisson process as the critical value becomes arbitrarily large. The interarrival times of this Poisson process are exponentially distributed with rate of decay equal to the frequency of exceedance N(y_max). [5] Thus, the mean time between peaks, including the residence time or mean time before the very first peak, is the inverse of the frequency of exceedance, N^{-1}(y_max).
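Under the Poisson approximation, waiting times between exceedances can be drawn directly as exponential variates with rate N(y_max). This short sketch (reusing the illustrative rate N ≈ 0.022 per second from the example above) checks that the sample mean waiting time matches N^{-1}(y_max).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 0.0222  # illustrative exceedance frequency, per second
# interarrival times of the limiting Poisson process are Exp(N)
waits = rng.exponential(scale=1.0 / N, size=100_000)
print(f"sample mean wait = {waits.mean():.1f} s, 1/N = {1.0 / N:.1f} s")
```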
If the number of peaks exceeding y_max grows as a Poisson process, then the probability that at time t there has not yet been any peak exceeding y_max is e^{-N(y_max)t}. [6] Its complement,

$$p_{ex} = 1 - e^{-N(y_{\max})\,t},$$
is the probability of exceedance, the probability that y_max has been exceeded at least once by time t. [7] [8] This probability is useful for estimating whether an extreme event will occur during a specified time period, such as the lifespan of a structure or the duration of an operation.
If N(y_max)t is small, for example for the frequency of a rare event occurring in a short time period, then

$$p_{ex} \approx N(y_{\max})\,t.$$
Under this assumption, the frequency of exceedance is equal to the probability of exceedance per unit time, p_ex/t, and the probability of exceedance can be computed by simply multiplying the frequency of exceedance by the specified length of time.
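A small numeric check of this approximation, with an assumed rare-event rate: for N(y_max)t well below 1, the exact probability 1 − e^{−Nt} and the linearized Nt agree closely, and the approximation begins to degrade as Nt grows toward 1.

```python
import numpy as np

N = 1.0e-4  # assumed frequency of exceedance, per year
for t in (1.0, 50.0, 1000.0):  # exposure times, years
    exact = 1.0 - np.exp(-N * t)   # probability of exceedance
    approx = N * t                 # small-N*t approximation
    print(f"t = {t:7.1f} yr: exact = {exact:.6f}, N*t = {approx:.6f}")
```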