In mathematics, error analysis is the study of the kind and quantity of error, or uncertainty, that may be present in the solution to a problem. This issue is particularly prominent in applied areas such as numerical analysis and statistics.
In numerical simulation or modeling of real systems, error analysis is concerned with the changes in the output of the model as the parameters to the model vary about a mean.
For instance, in a system modeled as a function of two variables $z = f(x, y)$, error analysis deals with the propagation of the numerical errors in $x$ and $y$ (around mean values $\bar{x}$ and $\bar{y}$) to the error in $z$ (around a mean $\bar{z}$). [1]
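As a concrete illustration (a minimal sketch, with an assumed example function $f(x, y) = xy$ and assumed uncertainties that are not taken from the cited reference), the propagation of errors in $x$ and $y$ to $z$ can be checked numerically by comparing the first-order propagation formula with a Monte Carlo estimate:

```python
import numpy as np

# Minimal sketch: propagate errors in x and y to z = f(x, y) for an
# illustrative choice f(x, y) = x * y (example values are assumptions).
rng = np.random.default_rng(0)

x_mean, x_err = 2.0, 0.05   # assumed mean and standard uncertainty of x
y_mean, y_err = 3.0, 0.08   # assumed mean and standard uncertainty of y

def f(x, y):
    return x * y

# First-order (linearized) propagation:
# sigma_z^2 ~= (df/dx)^2 * sigma_x^2 + (df/dy)^2 * sigma_y^2
dz_dx, dz_dy = y_mean, x_mean
z_err_linear = np.sqrt((dz_dx * x_err) ** 2 + (dz_dy * y_err) ** 2)

# Monte Carlo check: sample x and y around their means and look at the spread of z.
x_samples = rng.normal(x_mean, x_err, 100_000)
y_samples = rng.normal(y_mean, y_err, 100_000)
z_err_mc = f(x_samples, y_samples).std()

print(f"linearized error in z : {z_err_linear:.4f}")
print(f"Monte Carlo error in z: {z_err_mc:.4f}")
```

The two estimates agree closely here because the relative uncertainties are small, so the linearization is a good approximation.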
In numerical analysis, error analysis comprises both forward error analysis and backward error analysis.
Forward error analysis involves the analysis of a function $z' = f'(a_0, a_1, \dots, a_n)$, which is an approximation (usually a finite polynomial) to a function $z = f(a_0, a_1, \dots, a_n)$, to determine the bounds on the error in the approximation; i.e., to find $\epsilon$ such that $0 \le |z - z'| \le \epsilon$. The evaluation of forward errors is desired in validated numerics. [2]
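For example, the following sketch (the function, interval, and polynomial degree are assumptions made for illustration, not taken from the cited sources) finds a forward error bound $\epsilon$ for a degree-6 Taylor polynomial approximating $\exp$ on $[0, 1]$ and verifies it numerically:

```python
import math

# Minimal sketch of a forward error bound: approximate f(x) = exp(x) on [0, 1]
# by its degree-n Taylor polynomial and bound |f(x) - f'(x)| by epsilon.

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about 0, evaluated at x."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

n = 6
# Lagrange remainder on [0, 1]: |exp(x) - p_n(x)| <= e * x**(n+1) / (n+1)!,
# so epsilon = e / (n+1)! bounds the forward error on the whole interval.
epsilon = math.e / math.factorial(n + 1)

# Check the bound against the observed error on a grid of points.
worst = max(abs(math.exp(x) - taylor_exp(x, n))
            for x in (i / 1000 for i in range(1001)))

print(f"a-priori bound epsilon : {epsilon:.3e}")
print(f"observed maximum error : {worst:.3e}")  # stays below epsilon
```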
Backward error analysis involves the analysis of the approximation function $z' = f'(a_0, a_1, \dots, a_n)$ to determine the bounds on the parameters $a_i$ such that the result $z' = z$. [3]
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. [4] The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem. [5]
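As a sketch of the backward-error viewpoint (the random linear system and the normwise backward-error formula below are standard textbook choices used here for illustration, not taken from the cited references), one can solve $Ax = b$ in floating point and measure how much the data would have to be perturbed for the computed solution to be exact:

```python
import numpy as np

# Minimal sketch: backward error of solving A x = b in floating point.
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_hat = np.linalg.solve(A, b)   # computed (rounded) solution
r = b - A @ x_hat               # residual of the computed solution

# x_hat is the exact solution of the nearby problem A x = b - r, so the size
# of r relative to the data measures the perturbation needed.
# Normwise relative backward error (2-norm):
backward_error = np.linalg.norm(r) / (
    np.linalg.norm(A, 2) * np.linalg.norm(x_hat) + np.linalg.norm(b))

# Forward error is bounded roughly by condition number times backward error.
cond = np.linalg.cond(A)
print(f"backward error        : {backward_error:.2e}")  # near machine epsilon
print(f"condition number of A : {cond:.2e}")
print(f"forward error bound   : {cond * backward_error:.2e}")
```

A tiny backward error together with a modest condition number indicates that the computed solution is about as accurate as the data deserves, which is the sense in which the algorithm is backward stable.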
The Global Positioning System (GPS) was created by the United States Department of Defense (DOD) in the 1970s and has come to be widely used for navigation, both by the U.S. military and by the general public. The analysis of errors in positions computed using GPS is important for understanding how GPS works and for knowing what magnitude of errors should be expected. GPS makes corrections for receiver clock errors and other effects, but residual errors remain that are not corrected.
In molecular dynamics (MD) simulations, there are errors due to inadequate sampling of the phase space or infrequently occurring events; these lead to statistical error due to random fluctuations in the measurements.
For a series of $M$ measurements of a fluctuating property $A$, the mean value is:

$$\langle A \rangle = \frac{1}{M} \sum_{\mu=1}^{M} A_{\mu}.$$

When these $M$ measurements are independent, the variance of the mean $\langle A \rangle$ is:

$$\sigma^{2}(\langle A \rangle) = \frac{1}{M}\,\sigma^{2}(A),$$

but in most MD simulations, there is correlation between the quantity $A$ at different times, so the variance of the mean $\langle A \rangle$ will be underestimated, as the effective number of independent measurements is actually less than $M$. In such situations we rewrite the variance as:

$$\sigma^{2}(\langle A \rangle) = \frac{1}{M}\,\sigma^{2}(A)\left[1 + 2\sum_{\mu=1}^{M}\left(1 - \frac{\mu}{M}\right)\phi_{\mu}\right],$$

where $\phi_{\mu}$ is the autocorrelation function defined by

$$\phi_{\mu} = \frac{\langle A_{\mu} A_{0} \rangle - \langle A \rangle^{2}}{\langle A^{2} \rangle - \langle A \rangle^{2}}.$$
We can then use the autocorrelation function to estimate the error bar. Fortunately, there is a much simpler method, based on block averaging. [6]
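A minimal sketch of block averaging (using a synthetic AR(1) series as a stand-in for an MD time series; the correlation parameter and series length are assumptions made for illustration):

```python
import numpy as np

# Minimal sketch of block averaging for a correlated time series A_mu.
rng = np.random.default_rng(2)
M = 2 ** 16
phi = 0.95                      # correlation between successive samples (assumed)
noise = rng.standard_normal(M)
A = np.empty(M)
A[0] = noise[0]
for t in range(1, M):
    A[t] = phi * A[t - 1] + noise[t]

naive_err = A.std(ddof=1) / np.sqrt(M)  # ignores correlation: too optimistic

def block_error(data, block_size):
    """Standard error of the mean estimated from block averages."""
    n_blocks = len(data) // block_size
    blocks = data[: n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n_blocks)

# Increase the block size until the estimate plateaus; the plateau value is
# the error bar on <A> that accounts for the correlation time.
for block_size in (1, 16, 64, 256, 1024, 4096):
    print(f"block size {block_size:5d}: error = {block_error(A, block_size):.4f}")

print(f"naive (uncorrelated) estimate: {naive_err:.4f}")
```

Once the blocks are much longer than the correlation time, successive block averages are effectively independent, and their scatter gives a reliable error bar without computing the autocorrelation function explicitly.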
Measurements generally have a small amount of error, and repeated measurements of the same item will generally result in slight differences in readings. These differences can be analyzed and follow certain known mathematical and statistical properties. If a set of data appears to be too faithful to the hypothesis, i.e., if the amount of error that would normally be present in such measurements does not appear, a conclusion can be drawn that the data may have been forged. Error analysis alone is typically not sufficient to prove that data have been falsified or fabricated, but it may provide the supporting evidence necessary to confirm suspicions of misconduct.