An inhomogeneous cosmology is a physical cosmological theory (an astronomical model of the physical universe's origin and evolution) which, unlike the dominant cosmological concordance model, assumes that inhomogeneities in the distribution of matter across the universe affect local gravitational forces (i.e., at the galactic level) enough to skew our view of the Universe. [3] When the universe began, matter was distributed homogeneously, but over billions of years, galaxies, clusters of galaxies, and superclusters coalesced. Einstein's theory of general relativity states that they warp the space-time around them.
While the concordance model acknowledges this fact, it assumes that such inhomogeneities are not sufficient to affect large-scale averages of gravity observations. Two studies [4] [5] reported in 1998–1999 that high-redshift supernovae were farther away than existing calculations predicted. It was suggested that the expansion of the universe was accelerating, and dark energy, a repulsive energy inherent in space, was proposed as an explanation. Dark energy became widely accepted, but it remains unexplained. Inhomogeneous cosmology falls into the class of models that might not require dark energy.
Inhomogeneous cosmologies assume that the backreactions of denser structures, and of empty voids, on space-time are significant. When they are not neglected, they distort our understanding of time and our observations of distant objects. Buchert's equations, published in 1997 and 2000, derive from general relativity but allow local gravitational variations to be included. Alternative models have been proposed under which the acceleration of the universe is a misinterpretation of astronomical observations and in which dark energy is unnecessary. [6] [7] For example, in 2007, David Wiltshire proposed a model (timescape cosmology) in which backreactions cause time to run more slowly in dense regions and more quickly in voids, leading the supernovae observed in 1998 to appear farther away than they actually were. [8] [9] Timescape cosmology may also imply that the expansion of the universe is in fact slowing. [3]
The conflict between the two cosmologies derives from the inflexibility of Einstein's theory of general relativity, which shows how gravity is formed by the interaction of matter, space, and time. [10] Physicist John Wheeler famously summed up the theory's essence as "Matter tells space how to curve; space tells matter how to move." [11] However, in order to build a workable cosmological model, all of the terms on both sides of Einstein's equations must be balanced: on one side, matter (i.e., all the things that warp time and space); on the other, the curvature of the universe and the speed at which space-time is expanding. [10] In short, a model requires a particular amount of matter in order to produce particular curvatures and expansion rates.
In terms of matter, all modern cosmologies are founded on the cosmological principle, which states that whichever direction we look from Earth, the universe is basically the same: homogeneous and isotropic (uniform in all dimensions). [10] This principle grew out of Copernicus's assertion that there were no special observers in the universe and nothing special about the Earth's location in the universe (i.e., Earth was not the center of the universe, as previously thought). Since the publication of general relativity in 1915, this homogeneity and isotropy have greatly simplified the process of devising cosmological models.
In terms of the curvature of space-time and the shape of the universe, the universe can theoretically be closed (positive curvature, with space folding back on itself, the three-dimensional analogue of a sphere's surface), open (negative curvature, with space curving away from itself), or flat (zero curvature, the three-dimensional analogue of a flat sheet of paper). [10]
The first real difficulty came with regard to expansion, for in 1915, as previously, the universe was assumed to be static, neither expanding nor contracting. Einstein's solutions to his equations of general relativity, however, predicted a dynamic universe. Therefore, to make his equations consistent with the apparently static universe, he added a cosmological constant, a term representing some unexplained extra energy. But when, in the late 1920s, Georges Lemaître's and Edwin Hubble's observations confirmed Alexander Friedmann's notion (derived from general relativity) that the universe was expanding, the cosmological constant became unnecessary; Einstein later called it "my greatest blunder." [10]
With this term gone from the equation, others derived the Friedmann–Lemaître–Robertson–Walker (FLRW) solution to describe such an expanding universe, a solution built on the assumption of a flat, isotropic, homogeneous universe. The FLRW model became the foundation of the standard model of a universe created by the Big Bang, and further observational evidence has helped to refine it. For example, a smooth, mostly homogeneous, and (at least when it was almost 400,000 years old) flat universe seemed to be confirmed by data from the cosmic microwave background (CMB). And after galaxies and clusters of galaxies were found in the 1970s, mainly by Vera Rubin, to be rotating faster than they should without flying apart, the existence of dark matter seemed confirmed, corroborating its earlier inference by Jacobus Kapteyn, Jan Oort, and Fritz Zwicky in the 1920s and 1930s and demonstrating the flexibility of the standard model. Dark matter is believed to make up roughly 23% of the energy density of the universe. [10]
Another observation in 1998 seemed to complicate the situation further: two separate studies [4] [5] found distant supernovae to be fainter than expected in a steadily expanding universe, implying that they were not merely moving away from the Earth but receding at an accelerating rate. The universe's expansion was calculated to have been accelerating since approximately 5 billion years ago. Given the gravitational braking effect that all the matter of the universe should have had on this expansion, a variation of Einstein's cosmological constant was reintroduced to represent an energy inherent in space, balancing the equations for a flat, accelerating universe. This also gave Einstein's cosmological constant new meaning: by reintroducing it into the equations to represent dark energy, a flat universe expanding ever faster can be reproduced. [10]
Although the nature of this energy has yet to be adequately explained, it makes up almost 70% of the energy density of the universe in the concordance model. And thus, when including dark matter, almost 95% of the universe's energy density is explained by phenomena that have been inferred but not entirely explained nor directly observed. Most cosmologists still accept the concordance model, although science journalist Anil Ananthaswamy calls this agreement a "wobbly orthodoxy." [10]
While the universe began with homogeneously distributed matter, enormous structures have since coalesced over billions of years: hundreds of billions of stars inside galaxies, clusters of galaxies, superclusters, and vast filaments of matter. These denser regions and the voids between them must, under general relativity, have some effect, as matter dictates how space-time curves. So the extra mass of galaxies and galaxy clusters (and dark matter, should particles of it ever be directly detected) must cause nearby space-time to curve more positively, and voids should have the opposite effect, causing the space-time around them to take on negative curvature. The question is whether these effects, called backreactions, are negligible or together amount to enough to change the universe's geometry. Most scientists have assumed that they are negligible, but this has been partly because there has been no way to average space-time geometry in Einstein's equations. [10]
In 2000, a set of new equations—now referred to as the set of Buchert equations—based on general relativity was published by cosmologist Thomas Buchert of the École Normale Supérieure in Lyon, France, which allow the effects of a non-uniform distribution of matter to be taken into account but still allow the behavior of the universe to be averaged. Thus, models based on a lumpy, inhomogeneous distribution of matter could now be devised. [3] "There is no dark energy, as far as I'm concerned," Buchert told New Scientist in 2016. "In ten years' time, dark energy is gone." In the same article, cosmologist Syksy Räsänen said, "It’s not been established beyond reasonable doubt that dark energy exists. But I’d never say that it has been established that dark energy does not exist." He also told the magazine that the question of whether backreactions are negligible in cosmology "has not been satisfactorily answered." [10]
Inhomogeneous cosmology in the most general sense (assuming a totally inhomogeneous universe) models the universe as a whole with a spacetime that possesses no spacetime symmetries. The cosmological spacetimes typically considered have either maximal symmetry, comprising three translational and three rotational symmetries (homogeneity and isotropy with respect to every point of spacetime); translational symmetry only (homogeneous models); or rotational symmetry only (spherically symmetric models). Models with fewer symmetries (e.g., axisymmetric models) are also considered symmetric. However, it is common to call spherically symmetric or non-homogeneous models inhomogeneous. In inhomogeneous cosmology, the large-scale structure of the universe is modeled by exact solutions of the Einstein field equations (i.e., non-perturbatively), unlike in cosmological perturbation theory, which studies the universe in a way that takes structure formation (galaxies, galaxy clusters, the cosmic web) into account but only perturbatively. [12]
Inhomogeneous cosmology usually includes the study of structure in the Universe by means of exact solutions of Einstein's field equations (i.e., metrics) [12] or by spatial or spacetime averaging methods. [13] Such models are not homogeneous, [14] but may allow effects that can be interpreted as dark energy, or can lead to cosmological structures such as voids or galaxy clusters. [12] [13]
Perturbation theory, which deals with small perturbations from, e.g., a homogeneous metric, only holds as long as the perturbations are not too large, and N-body simulations use Newtonian gravity, which is a good approximation only when speeds are low and gravitational fields are weak.
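For contrast with the exact solutions discussed below, the linearly perturbed FLRW line element can be written in the conformal Newtonian gauge (a standard form in cosmological perturbation theory; shown here for a single scalar potential Φ, no anisotropic stress, and units with c = 1):

```latex
\mathrm{d}s^2 = a^2(\eta)\left[-(1+2\Phi)\,\mathrm{d}\eta^2
  + (1-2\Phi)\,\delta_{ij}\,\mathrm{d}x^i\,\mathrm{d}x^j\right],
\qquad |\Phi| \ll 1 .
```

The validity of this description rests on the smallness of Φ; inhomogeneous cosmology asks what happens when structure can no longer be treated as a small correction to a homogeneous background.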
Work towards a non-perturbative approach includes the Relativistic Zel'dovich Approximation. [15] As of 2016, Thomas Buchert, George Ellis, Edward Kolb, and their colleagues [16] judged that, if the universe is described by cosmic variables in a backreaction scheme that includes coarse-graining and averaging, the question of whether dark energy is an artifact of the traditional way of using the Einstein equation remains unanswered. [17]
The earliest historical example of an inhomogeneous (though spherically symmetric) solution is the Lemaître–Tolman metric (or LTB model, for Lemaître–Tolman–Bondi [18] [19] [20]). The Stephani metric can be spherically symmetric or totally inhomogeneous. [21] [22] [23] Other examples are the Szekeres metric, Szafron metric, Barnes metric, Kustaanheimo–Qvist metric, and Senovilla metric. [12] The Bianchi metrics, as given in the Bianchi classification, and the Kantowski–Sachs metrics are homogeneous.
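In comoving, synchronous coordinates, the Lemaître–Tolman–Bondi line element for a spherically symmetric dust distribution takes the form (with R(t, r) the areal radius, E(r) a free curvature function, and a prime denoting ∂/∂r):

```latex
\mathrm{d}s^2 = -c^2\,\mathrm{d}t^2
  + \frac{\left[R'(t,r)\right]^2}{1+2E(r)}\,\mathrm{d}r^2
  + R^2(t,r)\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\varphi^2\right).
```

The homogeneous FLRW case is recovered when R(t, r) = a(t) r and E(r) ∝ r², which is why the LTB model is a natural testbed for the effect of radial inhomogeneity.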
The simplest averaging approach is the scalar averaging approach, [24] leading to the kinematical backreaction and mean 3-Ricci curvature functionals; Buchert's equations are the most commonly used equations of such averaging methods. [13] The simplest averaging kernels include spheres (cylinders, when viewed with a time component), Gaussians, and hard-momentum cutoffs. Spherical kernels work well for non-relativistic fluids (dust); hard-momentum cutoffs are more convenient for relativistic fluid calculations (photons and pre-recombination universes).
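For an irrotational dust universe (and vanishing cosmological constant), Buchert's averaged equations on a spatial domain 𝒟 can be written as follows, with a_𝒟 the volume scale factor of the domain, ⟨·⟩_𝒟 the spatial average, θ the local expansion rate, σ² the shear scalar, and 𝒬_𝒟 the kinematical backreaction mentioned above:

```latex
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
  = -4\pi G\,\langle\varrho\rangle_{\mathcal D} + \mathcal{Q}_{\mathcal D},
\qquad
3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{2}
  = 8\pi G\,\langle\varrho\rangle_{\mathcal D}
  - \tfrac{1}{2}\,\langle\mathcal{R}\rangle_{\mathcal D}
  - \tfrac{1}{2}\,\mathcal{Q}_{\mathcal D},
```

```latex
\mathcal{Q}_{\mathcal D}
  = \tfrac{2}{3}\left(\langle\theta^{2}\rangle_{\mathcal D}
  - \langle\theta\rangle_{\mathcal D}^{2}\right)
  - 2\,\langle\sigma^{2}\rangle_{\mathcal D}.
```

A sufficiently positive 𝒬_𝒟 acts like an acceleration term in the first equation, which is why backreaction has been discussed as a possible substitute for dark energy.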
In 2007, David L. Wiltshire, a professor of theoretical physics at the University of Canterbury in New Zealand, argued in the New Journal of Physics that quasilocal variations in gravitational energy had led in 1998 to the false conclusion that the expansion of the universe is accelerating. [8] Moreover, because the equivalence principle holds that gravitational and inertial energy are equivalent, and thus prevents aspects of gravitational energy from being differentiated at a local level, scientists misidentified these aspects as dark energy. [8] This misidentification was the result of presuming an essentially homogeneous universe, as the standard cosmological model does, and of not accounting for temporal differences between matter-dense areas and voids. Wiltshire and others argued that if the universe is assumed to be neither homogeneous nor flat, models could be devised in which the apparent acceleration of the universe's expansion is explained otherwise. [3]
Another important step left out of the standard model, Wiltshire claimed, is the observationally proven fact that gravity slows time. Thus, from the perspective of the same observer, a clock runs faster in empty space, where gravitation is weak, than inside a galaxy, where gravity is much stronger, and he argued that the difference in elapsed time between clocks in the Milky Way galaxy and clocks floating in a void could be as large as 38%. Unless we can correct for these timescapes, each with a different elapsed time, our observations of the expansion of space will be, and are, incorrect. Wiltshire claimed that the 1998 supernovae observations that led to the conclusion of an accelerating universe and dark energy can instead be explained by Buchert's equations if certain strange aspects of general relativity are taken into account. [3]
A 2024 study examining the Pantheon+ Type Ia Supernova dataset conducted a significant test of the Timescape cosmology. By employing a model-independent statistical approach, the researchers found that the Timescape model could account for the observed cosmic acceleration without the need for dark energy. This result suggested that inhomogeneous cosmological models may offer viable alternatives to the standard ΛCDM framework and warranted further exploration to assess their ability to explain other key cosmological phenomena. [25]
The arguments of Wiltshire have been contested by Ethan Siegel. [26]
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. The notion of an expanding universe was first derived scientifically by physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical support for an expanding universe is known as Hubble's law, published by physicist Edwin Hubble in 1929, which discerned that galaxies are moving away from Earth at speeds proportional to their distance. Independently of Friedmann's work and of Hubble's observations, physicist Georges Lemaître proposed in 1931 that the universe emerged from a "primeval atom", introducing the modern notion of the Big Bang.
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
General relativity, also known as the general theory of relativity and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
In cosmology, the cosmological constant, alternatively called Einstein's cosmological constant, is a coefficient that Albert Einstein initially added to his field equations of general relativity. He later removed it; however, much later it was revived to express the energy density of space, or vacuum energy, that arises in quantum mechanics. It is closely associated with the concept of dark energy.
Observations show that the expansion of the universe is accelerating, such that the velocity at which a distant galaxy recedes from the observer is continuously increasing with time. The accelerated expansion of the universe was discovered in 1998 by two independent projects, the Supernova Cosmology Project and the High-Z Supernova Search Team, which used distant type Ia supernovae to measure the acceleration. The idea was that as type Ia supernovae have almost the same intrinsic brightness, and since objects that are further away appear dimmer, the observed brightness of these supernovae can be used to measure the distance to them. The distance can then be compared to the supernovae's cosmological redshift, which measures how much the universe has expanded since the supernova occurred; the Hubble law established that the further away an object is, the faster it is receding. The unexpected result was that objects in the universe are moving away from one another at an accelerating rate. Cosmologists at the time expected that recession velocity would always be decelerating, due to the gravitational attraction of the matter in the universe. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. Confirmatory evidence has been found in baryon acoustic oscillations, and in analyses of the clustering of galaxies.
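The standard-candle reasoning above can be sketched numerically: in a flat FLRW universe the luminosity distance follows from an integral over the expansion history, and an accelerating (Λ-dominated) model predicts fainter supernovae at a given redshift than a decelerating matter-only model. This is a minimal illustration, not the teams' actual analysis; the Hubble constant H0 = 70 km/s/Mpc and the density parameters are assumed values chosen for the example.

```python
import numpy as np

C_KMS = 299792.458   # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (illustrative assumption)

def luminosity_distance(z, omega_m, omega_lambda, n=10_000):
    """Luminosity distance (Mpc) in a flat FLRW universe via d_L = (1+z) * comoving distance."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_lambda)  # H0 / H(z)
    # Trapezoidal integration of dz / E(z) gives the dimensionless comoving distance.
    comoving = (C_KMS / H0) * np.sum((inv_e[:-1] + inv_e[1:]) / 2.0) * (zs[1] - zs[0])
    return (1.0 + z) * comoving

def distance_modulus(d_l_mpc):
    """mu = m - M = 5 log10(d_L / 10 pc), the observable brightness deficit."""
    return 5.0 * np.log10(d_l_mpc * 1.0e6 / 10.0)

# A z = 0.5 supernova appears roughly 0.4 mag fainter (larger mu) in the
# accelerating Lambda model than in a decelerating matter-only model.
mu_accel = distance_modulus(luminosity_distance(0.5, 0.3, 0.7))
mu_matter = distance_modulus(luminosity_distance(0.5, 1.0, 0.0))
```

Comparing `mu_accel` with `mu_matter` reproduces, in caricature, the 1998 signal: the observed supernovae matched the fainter, accelerating-universe prediction.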
Hubble's law, also known as the Hubble–Lemaître law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther a galaxy is from the Earth, the faster it moves away. A galaxy's recessional velocity is typically determined by measuring its redshift, a shift in the frequency of light emitted by the galaxy.
In modern physical cosmology, the cosmological principle is the notion that the spatial distribution of matter in the universe is uniformly isotropic and homogeneous when viewed on a large enough scale, since the forces are expected to act equally throughout the universe on a large scale, and should, therefore, produce no observable inequalities in the large-scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
A non-standard cosmology is any physical cosmological model of the universe that was, or still is, proposed as an alternative to the then-current standard model of cosmology. The term non-standard is applied to any theory that does not conform to the scientific consensus. Because the term depends on the prevailing consensus, the meaning of the term changes over time. For example, hot dark matter would not have been considered non-standard in 1990, but would have been in 2010. Conversely, a non-zero cosmological constant resulting in an accelerating universe would have been considered non-standard in 1990, but is part of the standard cosmology in 2010.
The Friedmann–Lemaître–Robertson–Walker metric is a metric based on an exact solution of the Einstein field equations of general relativity. The metric describes a homogeneous, isotropic, expanding universe that is path-connected, but not necessarily simply connected. The general form of the metric follows from the geometric properties of homogeneity and isotropy; Einstein's field equations are only needed to derive the scale factor of the universe as a function of time. Depending on geographical or historical preferences, the set of the four scientists – Alexander Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker – are variously grouped as Friedmann, Friedmann–Robertson–Walker (FRW), Robertson–Walker (RW), or Friedmann–Lemaître (FL). This model is sometimes called the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.
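In reduced-circumference polar coordinates, the FLRW line element can be written as follows, with a(t) the scale factor and k the curvature parameter (+1, 0, or −1 for closed, flat, and open spatial geometry):

```latex
\mathrm{d}s^2 = -c^2\,\mathrm{d}t^2
  + a^2(t)\left[\frac{\mathrm{d}r^2}{1-kr^2}
  + r^2\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\varphi^2\right)\right].
```

Homogeneity and isotropy fix this form; the Einstein field equations are needed only to determine a(t).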
The expansion of the universe is parametrized by a dimensionless scale factor. Also known as the cosmic scale factor or sometimes the Robertson–Walker scale factor, this is a key parameter of the Friedmann equations.
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ) and associated with dark energy; cold dark matter; and ordinary matter.
The flatness problem is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time.
The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern cosmic expansion in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924.
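In modern notation, with a(t) the scale factor, k the curvature parameter, and Λ the cosmological constant, the two Friedmann equations for such a perfect fluid read:

```latex
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
```

The second equation shows why acceleration (ä > 0) requires either a positive Λ or a component with sufficiently negative pressure.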
In cosmology, a static universe is a cosmological model in which the universe is both spatially and temporally infinite, and space is neither expanding nor contracting. Such a universe does not have so-called spatial curvature; that is to say that it is 'flat' or Euclidean. A static infinite universe was first proposed by English astronomer Thomas Digges (1546–1595).
The Weyl curvature hypothesis, which arises in the application of Albert Einstein's general theory of relativity to physical cosmology, was introduced by the British mathematician and theoretical physicist Roger Penrose in an article in 1979 in an attempt to provide explanations for two of the most fundamental issues in physics. On the one hand, one would like to account for a universe which on its largest observational scales appears remarkably spatially homogeneous and isotropic in its physical properties; on the other hand, there is the deep question of the origin of the second law of thermodynamics.
The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion, so it does not mean that the universe expands "into" anything or that space exists "outside" it. To any observer in the universe, it appears that all but the nearest galaxies move away at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation applies only with respect to local reference frames and does not limit the recession rates of cosmologically distant objects.
The Milne model was a special-relativistic cosmological model of the universe proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space.
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe, while dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low: 7×10⁻³⁰ g/cm³, much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space.
In theoretical physics, back-reaction is often necessary to calculate the self-consistent behaviour of a particle or an object in an external field.
The Einstein–de Sitter universe is a model of the universe proposed by Albert Einstein and Willem de Sitter in 1932. On first learning of Edwin Hubble's discovery of a linear relation between the redshift of the galaxies and their distance, Einstein set the cosmological constant to zero in the Friedmann equations, resulting in a model of the expanding universe known as the Friedmann–Einstein universe. In 1932, Einstein and De Sitter proposed an even simpler cosmic model by assuming a vanishing spatial curvature as well as a vanishing cosmological constant. In modern parlance, the Einstein–de Sitter universe can be described as a cosmological model for a flat matter-only Friedmann–Lemaître–Robertson–Walker metric (FLRW) universe.