In astronomy, a correlation function describes the distribution of galaxies in the universe. By default, "correlation function" refers to the two-point autocorrelation function. The two-point autocorrelation function is a function of one variable (distance); it describes the excess probability of finding two galaxies separated by this distance (excess over and above the probability that would arise if the galaxies were simply scattered independently and with uniform probability). It can be thought of as a "clumpiness" factor - the higher the value for some distance scale, the more "clumpy" the universe is at that distance scale.
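The excess-probability reading suggests a direct way to estimate the two-point function from a galaxy catalogue: compare pair counts in the data with pair counts in an unclustered random catalogue. The sketch below is a minimal Python/NumPy illustration using the Peebles–Hauser ("natural") estimator DD/RR − 1 on a toy periodic box; the box size, binning, and uniform mock "survey" are assumptions made for the example, so the output should simply scatter around zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_separations(points, box):
    """All pairwise separations with periodic (minimum-image) wrapping."""
    d = points[:, None, :] - points[None, :, :]
    d -= box * np.round(d / box)                    # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)          # unique pairs only
    return r[iu]

def xi_natural(data, box, bins):
    """Peebles-Hauser ("natural") estimator: xi(r) = DD(r) / RR(r) - 1."""
    randoms = rng.uniform(0, box, size=data.shape)  # unclustered comparison catalogue
    dd, _ = np.histogram(pair_separations(data, box), bins=bins)
    rr, _ = np.histogram(pair_separations(randoms, box), bins=bins)
    return dd / np.maximum(rr, 1) - 1.0

# Toy "survey": uniformly scattered points, so xi(r) should scatter around zero.
box, n = 100.0, 500
data = rng.uniform(0, box, size=(n, 3))
bins = np.linspace(1.0, 20.0, 11)
print(np.round(xi_natural(data, box, bins), 2))
```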
The following definition (from Peebles 1980) is often cited: given a random galaxy in a location, the correlation function describes the probability that another galaxy will be found within a given distance of it.
However, it can only be correct in the statistical sense that it is averaged over a large number of galaxies chosen as the first, random galaxy. If just one random galaxy is chosen, then the definition is no longer correct, firstly because it is meaningless to talk of just one "random" galaxy, and secondly because the function will vary wildly depending on which galaxy is chosen, in contradiction with its definition as a function.
Assuming the universe is isotropic (which observations suggest), the correlation function is a function of a scalar distance. The two-point correlation function can then be written as ξ(r) = ⟨δ(x)δ(x + r)⟩, where x and r are the position and separation vectors and δ(x) = (ρ(x) − ρ̄)/ρ̄ is a unitless measure of overdensity, defined at every point. Letting r = |r| be the scalar separation, it can also be expressed as the integral ξ(r) = (1/V) ∫ δ(x)δ(x + r) d³x over the volume V, with an additional average over the orientations of r.
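To make the volume-average form concrete, the following sketch (a toy example, not part of the original text) measures ⟨δ(x)δ(x + r)⟩ on a small periodic grid filled with an uncorrelated lognormal density field; lags are taken along the grid axes and averaged, so ξ is large at zero lag and scatters around zero elsewhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy periodic density field on a cubic grid; delta is the overdensity (rho/mean - 1).
n = 32
rho = rng.lognormal(mean=0.0, sigma=0.5, size=(n, n, n))
delta = rho / rho.mean() - 1.0

def xi_grid(delta, r):
    """Volume-averaged <delta(x) delta(x + r)> for a lag of r grid cells,
    averaged over the three coordinate axes (isotropy assumption)."""
    return np.mean([np.mean(delta * np.roll(delta, r, axis=a)) for a in range(3)])

for r in (0, 1, 2, 4, 8):
    print(r, round(xi_grid(delta, r), 4))
```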
The spatial correlation function is related to the Fourier-space power spectrum of the galaxy distribution, P(k), as ξ(r) = (1/2π²) ∫ P(k) [sin(kr)/(kr)] k² dk, with the integral taken over all wavenumbers k from 0 to ∞.
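As a rough numerical check of this relation, the sketch below evaluates the integral on a uniform k grid for a made-up power spectrum with an exponential cutoff; both the spectrum and the grid are illustrative assumptions, not values from the text.

```python
import numpy as np

def xi_from_pk(r, k, pk):
    """xi(r) = (1 / 2 pi^2) * integral of P(k) * sin(kr)/(kr) * k^2 dk,
    evaluated with a simple Riemann sum on a uniform k grid.
    Note np.sinc(x) = sin(pi x) / (pi x), hence the division by pi."""
    dk = k[1] - k[0]
    return np.sum(pk * np.sinc(k * r / np.pi) * k ** 2) * dk / (2.0 * np.pi ** 2)

# Made-up power spectrum with a cutoff so the integral converges.
k = np.linspace(1e-4, 10.0, 20000)
pk = k * np.exp(-(k / 0.5) ** 2)

for r in (1.0, 5.0, 10.0, 50.0):
    print(r, xi_from_pk(r, k, pk))
```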
The n-point autocorrelation functions for n greater than 2 or cross-correlation functions for particular object types are defined similarly to the two-point autocorrelation function.
The correlation function is important for theoretical models of physical cosmology because it provides a means of testing models that make different assumptions about the contents of the universe.
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
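As a small illustration (constructed for this text, not taken from it), the following sketch computes the sample autocorrelation of a sinusoid buried in noise; the period, noise level, and series length are arbitrary choices, and the peak of the autocorrelation near the true period is what reveals the hidden periodic signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# A periodic signal buried in noise; its autocorrelation peaks at lags that
# are multiples of the period, revealing the hidden periodicity.
n, period = 2000, 50
t = np.arange(n)
x = np.sin(2 * np.pi * t / period) + 1.5 * rng.standard_normal(n)
x -= x.mean()

acf = np.correlate(x, x, mode="full")[n - 1:]   # non-negative lags only
acf /= acf[0]                                   # normalise so acf[0] = 1

for lag in (0, period // 2, period, 2 * period):
    print(lag, round(acf[lag], 3))              # dip at half a period, peak at the period
```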
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as ∇²f = 0 or Δf = 0, where Δ = ∇⋅∇ = ∇² is the Laplace operator, ∇⋅ is the divergence operator, ∇ is the gradient operator, and f is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
In physics, a Langevin equation is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
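A minimal sketch of a Langevin equation in practice, assuming the simplest case of an overdamped particle in a harmonic trap (an Ornstein–Uhlenbeck process) integrated with the Euler–Maruyama scheme; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped Langevin equation in a harmonic trap (Ornstein-Uhlenbeck process):
#   dx = -k * x * dt + sqrt(2 * D) * dW
# integrated with the Euler-Maruyama scheme; the stationary variance is D / k.
k, D, dt, steps = 1.0, 0.5, 1e-3, 100_000
x = np.empty(steps)
x[0] = 0.0
for i in range(1, steps):
    x[i] = x[i - 1] - k * x[i - 1] * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

print(round(x[steps // 2:].var(), 3))   # should be close to D / k = 0.5
```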
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇⋅∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf (p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f (p).
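As a concrete check of this picture, the sketch below applies a five-point finite-difference Laplacian to the harmonic function f(x, y) = x² − y², for which Laplace's equation ∇²f = 0 holds; the grid and spacing are arbitrary choices for the example.

```python
import numpy as np

# Five-point finite-difference Laplacian on a 2D grid. For the harmonic function
# f(x, y) = x**2 - y**2, Laplace's equation holds, so the discrete Laplacian
# should vanish (up to round-off) away from the wrap-around boundary.
h = 0.01
x, y = np.meshgrid(np.arange(-1, 1, h), np.arange(-1, 1, h), indexing="ij")
f = x ** 2 - y ** 2

lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
       np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h ** 2

print(np.abs(lap[1:-1, 1:-1]).max())   # ~ 0, i.e. numerically harmonic
```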
In the calculus of variations, a field of mathematical analysis, the functional derivative relates a change in a functional to a change in a function on which the functional depends.
In thermodynamics, the Onsager reciprocal relations express the equality of certain ratios between flows and forces in thermodynamic systems out of equilibrium, but where a notion of local equilibrium exists.
A directional derivative is a concept in multivariable calculus that measures the rate at which a function changes in a particular direction at a given point.
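A quick numerical illustration (an example constructed for this text): for f(x, y) = x²y the directional derivative along a unit vector u equals ∇f⋅u, which the central-difference estimate below reproduces.

```python
import numpy as np

# Directional derivative of f(x, y) = x**2 * y at p = (1, 2) along the unit
# vector u = (3, 4)/5: analytically it equals grad f . u with grad f = (2xy, x**2).
f = lambda v: v[0] ** 2 * v[1]
p = np.array([1.0, 2.0])
u = np.array([3.0, 4.0]) / 5.0

grad_at_p = np.array([2 * p[0] * p[1], p[0] ** 2])       # (4, 1) at p
h = 1e-6
central_diff = (f(p + h * u) - f(p - h * u)) / (2 * h)   # numerical estimate along u

print(grad_at_p @ u, central_diff)                       # both ~ 3.2
```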
In statistics, propagation of uncertainty is the effect of variables' uncertainties on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations which propagate due to the combination of variables in the function.
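The standard first-order propagation formula can be checked against a Monte Carlo draw; the sketch below does this for the product f = x·y with independent, purely illustrative uncertainties.

```python
import numpy as np

rng = np.random.default_rng(4)

# First-order propagation of uncertainty for f(x, y) = x * y with independent
# errors: sigma_f**2 ~ (y * sigma_x)**2 + (x * sigma_y)**2. The illustrative
# values below are compared against a direct Monte Carlo simulation.
x, sigma_x = 10.0, 0.2
y, sigma_y = 3.0, 0.1

sigma_f_linear = np.hypot(y * sigma_x, x * sigma_y)
samples = rng.normal(x, sigma_x, 100_000) * rng.normal(y, sigma_y, 100_000)

print(round(sigma_f_linear, 3), round(samples.std(), 3))   # both ~ 1.17
```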
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality.
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
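As a small self-contained example (illustrative, not from the text), the sketch below draws a matrix from the Gaussian Orthogonal Ensemble and bins its eigenvalues; for large matrices the spectral density approaches Wigner's semicircle, one of the classic results of random matrix theory.

```python
import numpy as np

rng = np.random.default_rng(5)

# Draw a matrix from the Gaussian Orthogonal Ensemble (GOE) and histogram its
# eigenvalues; for large N the spectral density approaches Wigner's semicircle
# supported on [-2, 2].
n = 1000
a = rng.standard_normal((n, n))
goe = (a + a.T) / np.sqrt(2 * n)        # symmetric, scaled so the spectrum is O(1)
eigenvalues = np.linalg.eigvalsh(goe)

density, _ = np.histogram(eigenvalues, bins=20, range=(-2, 2), density=True)
print(np.round(density, 2))             # roughly traces a semicircle, peaking near 1/pi
```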
Time-dependent density-functional theory (TDDFT) is a quantum mechanical theory used in physics and chemistry to investigate the properties and dynamics of many-body systems in the presence of time-dependent potentials, such as electric or magnetic fields. The effect of such fields on molecules and solids can be studied with TDDFT to extract features like excitation energies, frequency-dependent response properties, and photoabsorption spectra.
In physics, specifically in the study of the theory of general relativity and many alternatives to it, the post-Newtonian formalism is a calculational tool that expresses Einstein's (nonlinear) equations of gravity in terms of the lowest-order deviations from Newton's law of universal gravitation. This allows approximations to Einstein's equations to be made in the case of weak fields. Higher-order terms can be added to increase accuracy, but for strong fields, it may be preferable to solve the complete equations numerically. Some of these post-Newtonian approximations are expansions in a small parameter, which is the ratio of the velocity of the matter forming the gravitational field to the speed of light, which in this case is better called the speed of gravity. In the limit, when the fundamental speed of gravity becomes infinite, the post-Newtonian expansion reduces to Newton's law of gravity.
In electromagnetism, charge density is the amount of electric charge per unit length, surface area, or volume. Volume charge density (ρ) is the quantity of charge per unit volume, measured in the SI system in coulombs per cubic meter (C⋅m⁻³), at any point in a volume. Surface charge density (σ) is the quantity of charge per unit area, measured in coulombs per square meter (C⋅m⁻²), at any point on a surface charge distribution on a two-dimensional surface. Linear charge density (λ) is the quantity of charge per unit length, measured in coulombs per meter (C⋅m⁻¹), at any point on a line charge distribution. Charge density can be either positive or negative, since electric charge can be either positive or negative.
In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small sample distribution of this ratio was derived by John von Neumann. Durbin and Watson applied this statistic to the residuals from least squares regressions, and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first-order autoregressive process. Note that the distribution of this test statistic does not depend on the estimated regression coefficients or the variance of the errors.
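The statistic itself is straightforward to compute from residuals; the sketch below (with illustrative simulated residuals) evaluates d = Σ(e_t − e_{t−1})² / Σe_t² for white-noise residuals, where d is near 2, and for strongly autocorrelated AR(1) residuals, where d falls well below 2.

```python
import numpy as np

rng = np.random.default_rng(6)

def durbin_watson(e):
    """d = sum((e_t - e_{t-1})**2) / sum(e_t**2); d ~ 2 indicates no lag-1
    autocorrelation, d << 2 positive autocorrelation, d >> 2 negative."""
    e = np.asarray(e)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

white = rng.standard_normal(5000)            # uncorrelated "residuals"

ar1 = np.empty(5000)                         # AR(1) residuals with coefficient 0.7
ar1[0] = white[0]
for t in range(1, 5000):
    ar1[t] = 0.7 * ar1[t - 1] + rng.standard_normal()

print(round(durbin_watson(white), 2))        # ~ 2.0
print(round(durbin_watson(ar1), 2))          # ~ 0.6, about 2 * (1 - 0.7)
```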
In many-body theory, the term Green's function is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators.
The matter power spectrum describes the density contrast of the universe as a function of scale. It is the Fourier transform of the matter correlation function. On large scales, gravity competes with cosmic expansion, and structures grow according to linear theory. In this regime, the density contrast field is Gaussian, Fourier modes evolve independently, and the power spectrum is sufficient to completely describe the density field. On small scales, gravitational collapse is non-linear, and can only be computed accurately using N-body simulations. Higher-order statistics are necessary to describe the full field at small scales.
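To make the Fourier-space description concrete, the sketch below (a toy estimator, not a survey analysis pipeline) measures the spherically averaged power spectrum of a periodic density-contrast grid with an FFT; for a white-noise field, whose Fourier modes are independent, the binned spectrum comes out roughly flat.

```python
import numpy as np

rng = np.random.default_rng(7)

def binned_power_spectrum(delta, nbins=16):
    """Spherically averaged |delta_k|**2 of a periodic density-contrast grid,
    binned in the magnitude of the wavevector (grid-frequency units,
    no survey-volume normalisation)."""
    n = delta.shape[0]
    power = np.abs(np.fft.fftn(delta)) ** 2 / delta.size
    kfreq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    which = np.digitize(kmag, edges)
    return np.array([power.ravel()[which == i].mean() for i in range(1, nbins + 1)])

# White-noise field: the binned power spectrum should be roughly flat (~1).
delta = rng.standard_normal((32, 32, 32))
print(np.round(binned_power_spectrum(delta), 2))
```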
In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter of the universe, caused by acoustic density waves in the primordial plasma of the early universe. In the same way that supernovae provide a "standard candle" for astronomical observations, BAO matter clustering provides a "standard ruler" for length scale in cosmology. The length of this standard ruler is given by the maximum distance the acoustic waves could travel in the primordial plasma before the plasma cooled to the point where it became neutral atoms, which stopped the expansion of the plasma density waves, "freezing" them into place. The length of this standard ruler can be measured by looking at the large scale structure of matter using astronomical surveys. BAO measurements help cosmologists understand more about the nature of dark energy by constraining cosmological parameters.
The Press–Schechter formalism is a mathematical model for predicting the number of objects of a certain mass within a given volume of the Universe. It was described in an academic paper by William H. Press and Paul Schechter in 1974.
Fluctuation X-ray scattering (FXS) is an X-ray scattering technique similar to small-angle X-ray scattering (SAXS), but is performed using X-ray exposures below sample rotational diffusion times. This technique, ideally performed with an ultra-bright X-ray light source, such as a free electron laser, results in data containing significantly more information as compared to traditional scattering methods.