In probability theory, fractional Brownian motion (fBm), also called fractal Brownian motion, is a generalization of Brownian motion. Unlike classical Brownian motion, the increments of fBm need not be independent. fBm is a continuous-time Gaussian process $B_H(t)$ on $[0, T]$ that starts at zero, has expectation zero for all $t$ in $[0, T]$, and has the following covariance function:

$E[B_H(t)\, B_H(s)] = \tfrac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right),$
where H is a real number in (0, 1), called the Hurst index or Hurst parameter associated with the fractional Brownian motion. The Hurst exponent describes the raggedness of the resultant motion, with a higher value leading to a smoother motion. Fractional Brownian motion was introduced by Mandelbrot & van Ness (1968).
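To make the definition concrete, here is a minimal Python sketch of this covariance function (the name `fbm_cov` is ours, not standard); the check at the end uses the fact that H = 1/2 recovers ordinary Brownian motion, whose covariance is min(t, s):

```python
import numpy as np

def fbm_cov(t, s, H):
    """Covariance of fBm: E[B_H(t) B_H(s)] = (|t|^2H + |s|^2H - |t-s|^2H) / 2."""
    return 0.5 * (abs(t) ** (2 * H) + abs(s) ** (2 * H) - abs(t - s) ** (2 * H))

# For H = 1/2 this reduces to the Brownian covariance min(t, s).
assert np.isclose(fbm_cov(2.0, 3.0, H=0.5), 2.0)
```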
The value of H determines what kind of process the fBm is:
- if H = 1/2, the process is in fact a Brownian motion or Wiener process;
- if H > 1/2, the increments of the process are positively correlated;
- if H < 1/2, the increments of the process are negatively correlated.
Fractional Brownian motion has stationary increments $X(t) = B_H(s+t) - B_H(s)$ (the distribution is the same for any $s$). The increment process $X(t)$ is known as fractional Gaussian noise.
There is also a generalization of fractional Brownian motion: n-th order fractional Brownian motion, abbreviated as n-fBm.[1] n-fBm is a Gaussian, self-similar, non-stationary process whose increments of order n are stationary. For n = 1, n-fBm is classical fBm.
Like the Brownian motion that it generalizes, fractional Brownian motion is named after the 19th-century botanist Robert Brown; fractional Gaussian noise is named after the mathematician Carl Friedrich Gauss.
Prior to the introduction of fractional Brownian motion, Lévy (1953) used the Riemann–Liouville fractional integral to define the process

$B_H(t) = \frac{1}{\Gamma\!\left(H + \frac{1}{2}\right)} \int_0^t (t-s)^{H - \frac{1}{2}}\, dB(s),$
where integration is with respect to the white noise measure dB(s). This integral turns out to be ill-suited as a definition of fractional Brownian motion because of its over-emphasis of the origin (Mandelbrot & van Ness 1968, p. 424). It does not have stationary increments.
The idea instead is to use a different fractional integral of white noise to define the process: the Weyl integral

$B_H(t) = B_H(0) + \frac{1}{\Gamma\!\left(H + \frac{1}{2}\right)} \left\{ \int_{-\infty}^{0} \left[ (t-s)^{H-\frac{1}{2}} - (-s)^{H-\frac{1}{2}} \right] dB(s) + \int_0^t (t-s)^{H-\frac{1}{2}}\, dB(s) \right\}$
for t > 0 (and similarly for t < 0). The resulting process has stationary increments.
The main difference between fractional Brownian motion and regular Brownian motion is that while the increments of Brownian motion are independent, the increments of fractional Brownian motion are not. If H > 1/2, the autocorrelation is positive: an increasing pattern in the previous steps makes it likely that the current step will be increasing as well. If H < 1/2, the autocorrelation is negative.
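The sign of this autocorrelation can be read off the autocovariance of fractional Gaussian noise at lag k, $\gamma(k) = \tfrac{1}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)$, which follows directly from the covariance function above. A small sketch (the helper name is ours):

```python
def fgn_autocov(k, H):
    """Autocovariance of unit-spaced fractional Gaussian noise at lag k."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H)
                  + abs(k - 1) ** (2 * H))

print(fgn_autocov(1, H=0.75))  # positive: persistent increments
print(fgn_autocov(1, H=0.25))  # negative: anti-persistent increments
print(fgn_autocov(1, H=0.50))  # zero: independent increments (Brownian)
```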
The process is self-similar, since in terms of probability distributions:

$B_H(at) \sim |a|^{H} B_H(t).$
This property is due to the fact that the covariance function is homogeneous of order 2H and can be considered as a fractal property. FBm can also be defined as the unique mean-zero Gaussian process, null at the origin, with stationary and self-similar increments.
It has stationary increments:

$B_H(t) - B_H(s) \sim B_H(t-s).$

Indeed, from the covariance function, $E\!\left[(B_H(t) - B_H(s))^2\right] = |t|^{2H} + |s|^{2H} - \left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right) = |t-s|^{2H}$, which depends only on $t - s$.
For H > 1/2 the process exhibits long-range dependence:

$\sum_{n=1}^{\infty} E\!\left[B_H(1)\,\big(B_H(n+1) - B_H(n)\big)\right] = \infty.$
Sample paths are almost surely nowhere differentiable. However, almost all trajectories are locally Hölder continuous of any order strictly less than H: for each such trajectory, for every T > 0 and for every ε > 0 there exists a (random) constant c such that

$|B_H(t) - B_H(s)| \le c\, |t-s|^{H-\varepsilon}$

for $0 < s, t < T$.
With probability 1, the graph of $B_H(t)$ has both Hausdorff dimension[2] and box dimension[3] equal to $2 - H$.
As for regular Brownian motion, one can define stochastic integrals with respect to fractional Brownian motion, usually called "fractional stochastic integrals". In general, though, unlike integrals with respect to regular Brownian motion, fractional stochastic integrals are not semimartingales; indeed, fractional Brownian motion itself fails to be a semimartingale unless H = 1/2.
Just as Brownian motion can be viewed as white noise filtered by $s^{-1}$ (i.e. integrated), fractional Brownian motion is white noise filtered by $s^{-H-1/2}$ (corresponding to fractional integration).
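This filtering view suggests an approximate frequency-domain synthesis: shape white noise by the amplitude response $f^{-(H+1/2)}$ and transform back. The sketch below (function name ours) is only a rough approximation, producing a periodic surrogate with boundary artifacts, unlike the exact methods described further on:

```python
import numpy as np

def fbm_spectral(n, H, seed=None):
    """Approximate fBm by filtering white noise with f^-(H + 1/2)."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n)                     # nonnegative grid frequencies
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** -(H + 0.5)              # drop the zero-frequency mode
    noise = rng.standard_normal(len(f)) + 1j * rng.standard_normal(len(f))
    path = np.fft.irfft(amp * noise, n)        # back to the time domain
    return path - path[0]                      # start the path at zero
```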
Practical computer realisations of an fBm can be generated,[4][5] although they are only finite approximations: the sample paths can be thought of as discrete sampled points of an fBm process. Three realizations are shown below, each with 1000 points of an fBm with Hurst parameter 0.75.
Realizations of three different types of fBm are shown below, each with 1000 points: the first with Hurst parameter 0.15, the second with Hurst parameter 0.55, and the third with Hurst parameter 0.95. The higher the Hurst parameter, the smoother the curve.
One can simulate sample-paths of an fBm using methods for generating stationary Gaussian processes with known covariance function. The simplest method relies on the Cholesky decomposition of the covariance matrix (explained below), which on a grid of size $n$ has complexity of order $O(n^3)$. A more complex, but computationally faster method is the circulant embedding method of Dietrich & Newsam (1997), which exploits the FFT and runs in $O(n \log n)$ time.
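A sketch of the circulant embedding approach for fractional Gaussian noise, following the idea of diagonalizing the embedded covariance with the FFT (function names are ours; the fBm path is recovered by cumulating the noise):

```python
import numpy as np

def fgn_circulant(n, H, seed=None):
    """Sample n steps of unit-spaced fractional Gaussian noise by
    embedding its Toeplitz covariance in a circulant matrix."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    r = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
               + np.abs(k - 1) ** (2 * H))     # fGn autocovariance
    c = np.concatenate([r, r[-2:0:-1]])        # first row of the circulant
    lam = np.fft.fft(c).real                   # its eigenvalues, via FFT
    if lam.min() < 0:
        raise ValueError("embedding is not nonnegative definite")
    m = len(c)
    w = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    return np.fft.fft(np.sqrt(lam / m) * w).real[:n]

def fbm_circulant(n, H, seed=None):
    """fBm on t = 0, 1, ..., n as cumulative sums of fGn."""
    return np.concatenate([[0.0], np.cumsum(fgn_circulant(n, H, seed))])
```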
Suppose we want to simulate the values of the fBm at times $t_1, \ldots, t_n$ using the Cholesky decomposition method.

- Form the matrix $\Gamma = \big(R(t_i, t_j)\big)_{i,j=1,\dots,n}$, where $R(t,s) = \tfrac{1}{2}\big(s^{2H} + t^{2H} - |t-s|^{2H}\big)$.
- Compute a square root matrix $\Sigma$ of $\Gamma$, i.e. a matrix satisfying $\Sigma\,\Sigma^{\top} = \Gamma$.
- Construct a vector $v$ of $n$ independent standard Gaussian numbers.
- The vector $\Sigma v$ then has the joint distribution of $\big(B_H(t_1), \ldots, B_H(t_n)\big)$.

In order to compute $\Sigma$, we can use for instance the Cholesky decomposition of $\Gamma$. An alternative method uses the eigendecomposition of $\Gamma$: since $\Gamma$ is symmetric and positive semidefinite, it can be written as $\Gamma = P \Lambda P^{-1}$, where $\Lambda$ is the diagonal matrix of eigenvalues $\lambda_1, \ldots, \lambda_n$ and the columns of $P$ are the corresponding eigenvectors; one then takes $\Sigma = P\,\Lambda^{1/2}\,P^{-1}$ with $\Lambda^{1/2} = \operatorname{diag}(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_n})$.

Note that the result is real-valued because $\lambda_i \ge 0$ for all $i$.

Note that since the eigenvectors are linearly independent, the matrix $P$ is invertible.
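Putting the steps above together, a minimal numpy sketch of the Cholesky method (exact but $O(n^3)$; the function name is ours, and the grid is assumed strictly positive so that $\Gamma$ is positive definite):

```python
import numpy as np

def fbm_cholesky(t, H, seed=None):
    """Exact sample of (B_H(t_1), ..., B_H(t_n)) at strictly positive,
    increasing times t, via a Cholesky square root of the covariance."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    # Gamma_ij = (t_i^2H + t_j^2H - |t_i - t_j|^2H) / 2
    gamma = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                   - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    sigma = np.linalg.cholesky(gamma)      # Sigma with Sigma Sigma^T = Gamma
    return sigma @ rng.standard_normal(len(t))

# Example: 1000 points on (0, 1] with Hurst parameter 0.75
path = fbm_cholesky(np.linspace(1e-3, 1.0, 1000), H=0.75)
```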
It is also known that[6]

$B_H(t) = \int_0^t K_H(t,s)\, dB(s),$

where B is a standard Brownian motion and

$K_H(t,s) = \frac{(t-s)^{H-\frac{1}{2}}}{\Gamma\!\left(H+\frac{1}{2}\right)}\; {}_2F_1\!\left(H-\frac{1}{2},\ \frac{1}{2}-H,\ H+\frac{1}{2},\ 1-\frac{t}{s}\right),$

where ${}_2F_1$ is the Euler hypergeometric integral.
Say we want to simulate an fBm at points $0 = t_0 < t_1 < \cdots < t_n = T$. Using a uniform grid $t_j = jT/n$ and the representation above, the path can be approximated by

$B_H(t_j) \approx \frac{n}{T} \sum_{i=0}^{j-1} \left( \int_{t_i}^{t_{i+1}} K_H(t_j, s)\, ds \right) \delta B_i,$

where the $\delta B_i$ are independent centered Gaussian variables with variance $T/n$.
The integral may be efficiently computed by Gaussian quadrature.
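A sketch of this second method, assuming scipy for the hypergeometric function; the kernel integrals over each sub-interval are computed with Gauss–Legendre quadrature (function names ours; accuracy degrades near the weak singularity of the kernel at s = t, especially for H < 1/2):

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def K_H(t, s, H):
    """Volterra kernel of the representation B_H(t) = int_0^t K_H(t,s) dB(s)."""
    return ((t - s) ** (H - 0.5) / gamma(H + 0.5)
            * hyp2f1(H - 0.5, 0.5 - H, H + 0.5, 1.0 - t / s))

def fbm_volterra(n, T, H, quad_order=10, seed=None):
    """Approximate B_H on the grid t_j = j T / n by discretizing the kernel
    representation against Brownian increments of variance T / n."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.standard_normal(n) * np.sqrt(dt)
    x, w = np.polynomial.legendre.leggauss(quad_order)
    path = np.zeros(n + 1)
    for j in range(1, n + 1):
        t_j = j * dt
        acc = 0.0
        for i in range(j):
            a, b = i * dt, (i + 1) * dt
            s = 0.5 * (b - a) * x + 0.5 * (a + b)   # nodes mapped to [a, b]
            integral = 0.5 * (b - a) * np.sum(w * K_H(t_j, s, H))
            acc += integral / dt * dB[i]            # average kernel x increment
        path[j] = acc
    return path
```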