A variable speed of light (VSL) is a feature of a family of hypotheses stating that the speed of light may in some way not be constant, for example, varying in space or time, or depending on frequency. Accepted classical theories of physics, and in particular general relativity, predict a constant speed of light in any local frame of reference; in some situations they also predict apparent variations of the speed of light depending on the frame of reference, but this article does not refer to such variations as a variable speed of light. Various alternative theories of gravitation and cosmology, many of them non-mainstream, incorporate variations in the local speed of light.
Attempts to incorporate a variable speed of light into physics were made by Robert Dicke in 1957, and by several researchers starting from the late 1980s.
VSL should not be confused with faster-than-light theories, with the dependence of the speed of light on a medium's refractive index, or with its measurement in a remote observer's frame of reference in a gravitational potential. In this context, the "speed of light" refers to the limiting speed c of the theory rather than to the velocity of propagation of photons.
Einstein's equivalence principle, on which general relativity is founded, requires that in any local, freely falling reference frame, the speed of light is always the same. [1] [2] This leaves open the possibility, however, that an inertial observer inferring the apparent speed of light in a distant region might calculate a different value. Spatial variation of the speed of light in a gravitational potential as measured against a distant observer's time reference is implicitly present in general relativity. [3] The apparent speed of light will change in a gravity field and, in particular, go to zero at an event horizon as viewed by a distant observer. [4] In deriving the gravitational redshift due to a spherically symmetric massive body, a radial speed of light dr/dt can be defined in Schwarzschild coordinates, with t being the time recorded on a stationary clock at infinity. The result is
dr/dt = 1 − 2m/r,
where m is MG/c² and where natural units are used such that c0 is equal to one. [5] [6]
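As a numerical illustration (not part of the cited derivations), the Python sketch below evaluates this coordinate speed in SI units, dr/dt = c0 (1 − 2m/r), for light near the Sun and shows that it vanishes at the Schwarzschild radius of a black hole, as described above.

```python
# Illustrative sketch: radial coordinate speed of light dr/dt in
# Schwarzschild coordinates, as seen by a distant observer.
# Standard SI values of G and c are used; the formula is
# dr/dt = c0 * (1 - 2*G*M / (r * c0**2)).

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C0 = 2.998e8       # speed of light far from the mass, m/s

def coordinate_light_speed(r, mass):
    """Radial dr/dt at Schwarzschild radial coordinate r (metres)."""
    rs = 2 * G * mass / C0**2          # Schwarzschild radius
    return C0 * (1 - rs / r)

M_SUN = 1.989e30                       # kg
for radius in (6.96e8, 1.5e11):        # solar surface, 1 au
    print(f"r = {radius:.3e} m: dr/dt = {coordinate_light_speed(radius, M_SUN):.6e} m/s")

# At the horizon of a 10-solar-mass black hole the coordinate speed vanishes.
rs_bh = 2 * G * 10 * M_SUN / C0**2
print(f"at r = r_s = {rs_bh:.3e} m: dr/dt = {coordinate_light_speed(rs_bh, 10 * M_SUN):.1e} m/s")
```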
Robert Dicke, in 1957, developed a VSL theory of gravity, a theory in which (unlike general relativity) the speed of light measured locally by a free-falling observer could vary. [7] Dicke assumed that both frequencies and wavelengths could vary, which, since c = νλ, resulted in a relative change of c. Dicke assumed a refractive index of the form n = 1 + 2GM/(rc²) (eqn. 5 in his paper) and showed it to be consistent with the observed value for light deflection. In a comment related to Mach's principle, Dicke suggested that, while the second term in eqn. 5, 2GM/(rc²), is small, the first term, 1, could have "its origin in the remainder of the matter in the universe".
Given that in a universe with an increasing horizon more and more masses contribute to the above refractive index, Dicke considered a cosmology where c decreased in time, providing an alternative explanation to the cosmological redshift. [7] : 374
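As a rough numerical sketch of the deflection argument, assuming Dicke's effective refractive index has the form n ≈ 1 + 2GM/(rc²) quoted above, the following snippet evaluates the index at the solar limb and the corresponding deflection of a grazing light ray, 4GM/(Rc²), which reproduces the familiar value of about 1.75 arcseconds.

```python
import math

# Illustrative sketch (not Dicke's own calculation): effective refractive
# index n(r) ~ 1 + 2*G*M/(r*c^2) at the solar limb, and the corresponding
# total deflection angle 4*G*M/(R*c^2) for a ray grazing the Sun.

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
R_SUN = 6.96e8         # m

n_limb = 1 + 2 * G * M_SUN / (R_SUN * C**2)
deflection_rad = 4 * G * M_SUN / (R_SUN * C**2)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(f"effective index at the solar limb: n = {n_limb:.9f}")
print(f"deflection of a grazing ray: {deflection_arcsec:.2f} arcsec")
```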
Variable speed of light models, including Dicke's, have been developed which agree with all known tests of general relativity. [8]
Other models make a link to Dirac's large numbers hypothesis. [9]
Several hypotheses for a varying speed of light, seemingly in contradiction with general relativity, have been published, including those of Giere and Tan (1986) [10] and Sanejouand (2009). [11] In 2003, Magueijo gave a review of such hypotheses. [12]
Cosmological models with varying speeds of light [13] have been proposed independently by Jean-Pierre Petit in 1988, [14] John Moffat in 1992, [15] and the team of Andreas Albrecht and João Magueijo in 1998 [16] to explain the horizon problem of cosmology and propose an alternative to cosmic inflation.
In 1937, Paul Dirac and others began investigating the consequences of natural constants changing with time. [17] For example, Dirac proposed a change of only 5 parts in 10¹¹ per year in the Newtonian constant of gravitation G to explain the relative weakness of the gravitational force compared to other fundamental forces. This has become known as the Dirac large numbers hypothesis.
However, Richard Feynman showed [18] that the gravitational constant most likely could not have changed this much in the past 4 billion years based on geological and solar system observations, although this may depend on assumptions about G varying in isolation. (See also strong equivalence principle.)
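For a sense of scale, the back-of-the-envelope sketch below compounds Dirac's proposed drift of 5 parts in 10¹¹ per year over the roughly 4 billion years covered by such geological and solar-system constraints; the cumulative change of nearly 20% that it yields illustrates the size of effect these arguments constrain.

```python
# Back-of-the-envelope sketch: cumulative effect of Dirac's proposed drift of
# roughly 5 parts in 10^11 per year in G, compounded over ~4 billion years.

rate_per_year = 5e-11          # fractional change per year (Dirac's proposal)
years = 4e9                    # ~time span of the geological record considered

# Compounded fractional change; for a decreasing G this is the surviving fraction.
fraction_remaining = (1 - rate_per_year) ** years
print(f"G today / G four billion years ago ≈ {fraction_remaining:.3f}")
print(f"i.e. a cumulative change of about {100 * (1 - fraction_remaining):.0f}%")
```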
One group, studying distant quasars, has claimed to detect a variation of the fine-structure constant [19] at the level of one part in 10⁵. Other authors dispute these results. Other groups studying quasars claim no detectable variation at much higher sensitivities. [20] [21] [22]
The natural nuclear reactor of Oklo has been used to check whether the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, ¹⁴⁹Sm captures a neutron to become ¹⁵⁰Sm, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago. Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies α was the same too. [23] [24]
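To put the quasar and Oklo numbers side by side, the short sketch below converts the disputed quasar result into an average drift rate and projects it over the Oklo time span. The 10-billion-year look-back time is an assumed round figure used only for illustration, not a value taken from the cited studies.

```python
# Rough arithmetic sketch of what the disputed quasar result would imply.
# The ~10-billion-year look-back time is an assumed round figure, not a
# number from the cited studies.

delta_alpha_over_alpha = 1e-5      # claimed variation of the fine-structure constant
lookback_years = 1e10              # assumed look-back time to the quasar absorbers

average_drift_per_year = delta_alpha_over_alpha / lookback_years
print(f"implied average drift: {average_drift_per_year:.1e} per year")

# Over the ~2 billion years since the Oklo reactor operated, such a drift
# would accumulate to roughly:
oklo_years = 2e9
print(f"implied change since Oklo: {average_drift_per_year * oklo_years:.1e}")
```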
Paul Davies and collaborators have suggested that it is in principle possible to disentangle which of the dimensionful constants composing the fine-structure constant (the elementary charge, the Planck constant, and the speed of light) is responsible for any observed variation. [25] However, this has been disputed by others and is not generally accepted. [26] [27]
Since any dimensionful quantity can be changed merely by changing one's choice of units, the question arises of what a variation in such a quantity actually means. John Barrow and others have argued that only variations in dimensionless quantities are physically meaningful.
Any equation of physical law can be expressed in a form in which all dimensional quantities are normalized against like-dimensioned quantities (a procedure called nondimensionalization), leaving only dimensionless quantities. Physicists can choose their units so that the physical constants c, G, ħ = h/(2π), 4πε0, and kB take the value one, so that every physical quantity is normalized against its corresponding Planck unit. On this basis it has been claimed that specifying the evolution of a dimensional quantity is meaningless. [29] When Planck units are used and equations of physical law are expressed in this nondimensionalized form, no dimensional physical constants such as c, G, ħ, ε0, or kB remain, only dimensionless quantities, as predicted by the Buckingham π theorem. Apart from their dependence on anthropometric units, there is no speed of light, gravitational constant, or Planck constant remaining in mathematical expressions of physical reality to be subject to such hypothetical variation. For example, in the case of a hypothetically varying gravitational constant G, the relevant dimensionless quantities that potentially vary ultimately become the ratios of the Planck mass to the masses of the fundamental particles. Some key dimensionless quantities thought to be constant that involve the speed of light (together with other dimensional quantities such as ħ, e, and ε0), notably the fine-structure constant and the proton-to-electron mass ratio, could in principle vary meaningfully, and their possible variation continues to be studied. [29]
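The Python sketch below illustrates this argument with standard SI values: the dimensional Planck units depend on the choice of metre, kilogram, and second, whereas dimensionless combinations such as the fine-structure constant, the proton-to-electron mass ratio, or a particle mass expressed in Planck units are independent of any unit choice and are the only candidates for meaningful variation.

```python
import math

# Illustrative sketch of the nondimensionalization argument above,
# using standard (lightly rounded) SI values of the constants.

c    = 2.998e8        # speed of light, m/s
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34      # reduced Planck constant, J s
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
m_e  = 9.109e-31      # electron mass, kg
m_p  = 1.673e-27      # proton mass, kg

# Dimensional Planck units: their numerical values depend on the choice of
# metre, kilogram and second, so a "variation" of c alone could be absorbed
# into a redefinition of units.
planck_mass   = math.sqrt(hbar * c / G)
planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck mass   ≈ {planck_mass:.3e} kg")
print(f"Planck length ≈ {planck_length:.3e} m")

# Dimensionless combinations survive any change of units; only these can
# vary in a unit-independent, physically meaningful way.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant ~ 1/137
mass_ratio = m_p / m_e                           # proton-to-electron mass ratio
electron_in_planck_units = m_e / planck_mass
print(f"fine-structure constant α ≈ {alpha:.6f}")
print(f"proton/electron mass ratio ≈ {mass_ratio:.1f}")
print(f"electron mass in Planck units ≈ {electron_in_planck_units:.3e}")
```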
From a very general point of view, G. F. R. Ellis and Jean-Philippe Uzan expressed concerns that a varying c would require a rewrite of much of modern physics, which depends throughout on a constant c. [30] [31] Ellis claimed that any varying-c theory (1) must redefine distance measurements; (2) must provide an alternative expression for the metric tensor in general relativity; (3) might contradict Lorentz invariance; (4) must modify Maxwell's equations; and (5) must be done consistently with respect to all other physical theories. VSL cosmologies remain outside mainstream physics.
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the very early universe. Following the inflationary period, the universe continued to expand, but at a slower rate. The re-acceleration of this slowing expansion due to dark energy began after the universe was already over 7.7 billion years old.
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
In theories of quantum gravity, the graviton is the hypothetical quantum of gravity, an elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string.
A physical constant, sometimes fundamental physical constant or universal constant, is a physical quantity that cannot be explained by a theory and therefore must be measured experimentally. It is distinct from a mathematical constant, which has a fixed numerical value, but does not directly involve any physical measurement.
In physics, the fine-structure constant, also known as the Sommerfeld constant, commonly denoted by α, is a fundamental physical constant which quantifies the strength of the electromagnetic interaction between elementary charged particles.
The following is a timeline of gravitational physics and general relativity.
In physics, a dimensionless physical constant is a physical constant that is dimensionless, i.e. a pure number having no units attached and having a numerical value that is independent of whatever system of units may be used.
In particle physics, the hypothetical dilaton particle is a particle of a scalar field that appears in theories with extra dimensions when the volume of the compactified dimensions varies. It appears as a radion in Kaluza–Klein theory's compactifications of extra dimensions. In Brans–Dicke theory of gravity, Newton's constant is not presumed to be constant but instead 1/G is replaced by a scalar field and the associated particle is the dilaton.
In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922.
The equivalence principle is the hypothesis that the observed equivalence of gravitational and inertial mass is a consequence of nature. The weak form, known for centuries, relates to masses of any composition in free fall taking the same trajectories and landing at identical times. The extended form by Albert Einstein requires special relativity to also hold in free fall and requires the weak equivalence to be valid everywhere. This form was a critical input for the development of the theory of general relativity. The strong form requires Einstein's form to work for stellar objects. Highly precise experimental tests of the principle limit possible deviations from equivalence to be very small.
The Dirac large numbers hypothesis (LNH) is an observation made by Paul Dirac in 1937 relating ratios of size scales in the Universe to ratios of force scales. The ratios constitute very large, dimensionless numbers: some 40 orders of magnitude in the present cosmological epoch. According to Dirac's hypothesis, the apparent similarity of these ratios might not be a mere coincidence but could instead point to a cosmology with unusual features, such as a gravitational constant that varies with time.
Tensor–vector–scalar gravity (TeVeS), developed by Jacob Bekenstein in 2004, is a relativistic generalization of Mordehai Milgrom's Modified Newtonian dynamics (MOND) paradigm.
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction.
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe while dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low: 7×10⁻³⁰ g/cm³, much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space.
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of c.
Modern searches for Lorentz violation are scientific studies that look for deviations from Lorentz invariance or symmetry, a set of fundamental frameworks that underpin modern science and fundamental physics in particular. These studies try to determine whether violations or exceptions might exist for well-known physical laws such as special relativity and CPT symmetry, as predicted by some variations of quantum gravity, string theory, and some alternatives to general relativity.
In particle physics and physical cosmology, Planck units are a system of units of measurement defined exclusively in terms of four universal physical constants: c, G, ħ, and kB. Expressing one of these physical constants in terms of Planck units yields a numerical value of 1. They are a system of natural units, defined using fundamental properties of nature rather than properties of a chosen prototype object. Originally proposed in 1899 by German physicist Max Planck, they are relevant in research on unified theories such as quantum gravity.
In physics, natural unit systems are measurement systems for which selected physical constants have been set to 1 through nondimensionalization of physical units. For example, the speed of light c may be set to 1, and it may then be omitted, equating mass and energy directly as E = m rather than using c as a conversion factor in the typical mass–energy equivalence equation E = mc². A purely natural system of units has all of its dimensions collapsed, such that the physical constants completely define the system of units and the relevant physical laws contain no conversion constants.
The term physical constant expresses the notion of a physical quantity subject to experimental measurement which is independent of the time or location of the experiment. The constancy (immutability) of any "physical constant" is thus subject to experimental verification.