In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. [1] There is no scientific consensus on why, for example, the weak force is 10^24 times stronger than gravity.
A hierarchy problem [2] occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it.
Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness.
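As a toy numerical illustration of such a delicate cancellation (the numbers below are arbitrary and not tied to any physical model), the following Python sketch shows how sensitively a small effective value depends on its bare value once a large correction is involved:

```python
# Toy illustration of a delicate cancellation between a bare value and a large
# correction. The magnitudes are arbitrary; only the sensitivity matters.
correction = -1.0e8      # a generic large "quantum correction" (arbitrary units)
effective = 1.0          # the small value actually measured

bare = effective - correction          # the bare value the theory must contain
print(f"bare value needed: {bare!r}")  # 100000001.0 -- tuned to 9 significant digits

# Shift the bare value by only one part in a million and the effective value
# changes by two orders of magnitude:
bare_shifted = bare * (1 + 1e-6)
print(f"effective value after the shift: {bare_shifted + correction:.6f}")
```

With these numbers the effective value jumps from 1 to roughly 101, which is the sense in which the bare parameter has to be "fine-tuned" to reproduce a small measured value.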
Over the past decade, many scientists [3] [4] [5] [6] [7] have argued that the hierarchy problem is a specific application of Bayesian statistics.
Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.
Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and a value near 4×10^29. One might wonder how such figures arise. But in particular, one might be especially curious about a theory where three values are close to one, and the fourth is so different; in other words, about the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder: if one force is so much weaker than the others that it needs a factor of 4×10^29 to allow it to be related to them in terms of effects, how did our universe come to be so exactly balanced when its forces emerged? In current particle physics, the differences between some parameters are much larger than this, so the question is even more noteworthy.
One answer given by philosophers is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be. [8] [9]
A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There might be underlying parameters, from which the known physical constants can be derived, whose values are less unbalanced, or there might be a model with fewer parameters.
In particle physics, the most important hierarchy problem is the question that asks why the weak force is 10^24 times as strong as gravity. [10] Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.
More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.
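To get a feel for the size of the cancellation being described, here is a minimal back-of-the-envelope sketch in Python. The Higgs and Planck masses are standard values; taking the full Planck mass as the cutoff and dropping loop factors and couplings is a deliberate simplification, so the result is only an order-of-magnitude statement:

```python
# Order-of-magnitude estimate of the fine-tuning implied by a Planck-scale cutoff.
m_H = 125.0        # Higgs boson mass in GeV
M_Pl = 1.22e19     # Planck mass in GeV, used here as the cutoff scale

m_H_sq = m_H**2              # observed Higgs mass squared, ~1.6e4 GeV^2
correction_scale = M_Pl**2   # size of a generic quadratic correction, ~1.5e38 GeV^2

print(f"observed m_H^2        : {m_H_sq:.2e} GeV^2")
print(f"quadratic correction  : {correction_scale:.2e} GeV^2")
print(f"required cancellation : about 1 part in {correction_scale / m_H_sq:.1e}")
```

The bare mass and the corrections would have to cancel to roughly one part in 10^34 for the observed Higgs mass to come out, which is the fine-tuning the text refers to.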
The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.
Many solutions to the hierarchy problem have been proposed.
Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. [11] This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry.
Each particle that couples to the Higgs field has an associated Yukawa coupling λf. The coupling with the Higgs field for fermions gives an interaction term $\mathcal{L}_{\mathrm{Yukawa}} = -\lambda_f \bar{\psi} H \psi$, with $\psi$ being the Dirac field and $H$ the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most strongly to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:

$\Delta m_{\mathrm{H}}^2 = -\frac{|\lambda_f|^2}{8\pi^2}\left[\Lambda_{\mathrm{UV}}^2 + \ldots\right]$
Here $\Lambda_{\mathrm{UV}}$ is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then the correction is quadratically divergent. However, suppose there existed two complex scalars (taken to be spin 0) such that:

$\lambda_S = |\lambda_f|^2$
Then by the Feynman rules, the correction (from both scalars) is:

$\Delta m_{\mathrm{H}}^2 = 2 \times \frac{\lambda_S}{16\pi^2}\left[\Lambda_{\mathrm{UV}}^2 + \ldots\right]$
(Note that the contribution here is positive. This is because of the spin-statistics theorem: fermion loops contribute with a negative sign and boson loops with a positive sign. Supersymmetry exploits this fact.)
The quadratically divergent contributions to the Higgs mass squared therefore cancel if we include both the fermionic and the bosonic particles. Supersymmetry is an extension of this idea that creates 'superpartners' for all Standard Model particles. [12]
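As a quick numerical check of this cancellation, the following Python sketch compares only the Λ²_UV coefficients of the two expressions above (the top-quark-like value of λf and the Planck-scale cutoff are illustrative choices; the logarithmic remainders hidden in the "..." are ignored):

```python
# Check that the quadratically divergent pieces cancel when lambda_S = |lambda_f|^2.
import math

lam_f = 0.94                 # roughly the top-quark Yukawa coupling (illustrative)
lam_S = abs(lam_f) ** 2      # the supersymmetric relation between the couplings
Lambda_UV = 1.22e19          # ultraviolet cutoff in GeV, taken at the Planck scale

fermion_loop = -abs(lam_f) ** 2 / (8 * math.pi ** 2) * Lambda_UV ** 2
scalar_loops = 2 * lam_S / (16 * math.pi ** 2) * Lambda_UV ** 2

print(f"fermion loop : {fermion_loop:+.3e} GeV^2")
print(f"scalar loops : {scalar_loops:+.3e} GeV^2")
print(f"sum          : {fermion_loop + scalar_loops:+.3e} GeV^2")  # zero up to rounding
```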
Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem would arise. But if the quadratic term in the Higgs field is absent, one must find a way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Coleman–Weinberg mechanism, with terms in the Higgs potential arising from quantum corrections. The mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai [13] and is currently under scrutiny. If no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.
No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. [14] However, extra dimensions could explain why gravity is so weak and why the expansion of the universe is faster than expected. [15]
If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity:

$\mathbf{g}(\mathbf{r}) = -Gm\frac{\hat{\mathbf{r}}}{r^2}$
which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass:

$G = \frac{\hbar c}{M_{\mathrm{Pl}}^2}$
If we extend this idea to $\delta$ extra dimensions, then we get:

$\mathbf{g}(\mathbf{r}) = -m\frac{\hat{\mathbf{r}}}{M_{\mathrm{Pl}_{3+1+\delta}}^{2+\delta}\, r^{2+\delta}}$
where $M_{\mathrm{Pl}_{3+1+\delta}}$ is the $(3+1+\delta)$-dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size $n$, much smaller than the normal dimensions. If we let $r \ll n$, then we recover the higher-dimensional force law above. However, if we let $r \gg n$, then we should get back our usual Newton's law. Indeed, when $r \gg n$, the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to $n^{\delta}$, because this is the flux in the extra dimensions. The formula is:

$\mathbf{g}(\mathbf{r}) = -m\frac{\hat{\mathbf{r}}}{M_{\mathrm{Pl}_{3+1+\delta}}^{2+\delta}\, r^{2}\, n^{\delta}}$
which gives:

$\frac{m}{M_{\mathrm{Pl}}^{2}\, r^{2}} = \frac{m}{M_{\mathrm{Pl}_{3+1+\delta}}^{2+\delta}\, r^{2}\, n^{\delta}} \;\Rightarrow\; M_{\mathrm{Pl}}^{2} = M_{\mathrm{Pl}_{3+1+\delta}}^{2+\delta}\, n^{\delta}$
Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
This section is adapted from "Quantum Field Theory in a Nutshell" by A. Zee. [16]
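A minimal numeric sketch of this trade-off, assuming (purely for illustration) a fundamental scale near 1 TeV and solving the relation above for the required size n of the extra dimensions:

```python
# Solve M_Pl^2 = M_*^(2+delta) * n^delta for the size n of the extra dimensions.
# M_* ~ 1 TeV is an illustrative assumption, not a measured value.
hbar_c = 1.973e-16   # GeV * m, used to convert 1/GeV into metres
M_Pl = 1.22e19       # observed four-dimensional Planck mass in GeV
M_star = 1.0e3       # assumed fundamental (higher-dimensional) Planck mass in GeV

for delta in (1, 2, 3):
    n_inverse_gev = (M_Pl ** 2 / M_star ** (2 + delta)) ** (1.0 / delta)
    n_metres = n_inverse_gev * hbar_c
    print(f"delta = {delta}: required size n ~ {n_metres:.1e} m")
```

With these inputs a single extra dimension would have to be astronomically large (and is therefore excluded), while two or three extra dimensions come out around the millimetre to nanometre scale, which is why short-distance tests of gravity constrain such scenarios.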
In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. [17] [18] This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale. [19]
In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. [20] [21] [22] It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations that gives the localized solution for matter fields coincides with one of the conditions of stability.
Subsequently, the closely related Randall–Sundrum scenarios were proposed, which offered their own solution to the hierarchy problem.
In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. [23] In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory. [24]
In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny, but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but it is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, [25] [26] [27] adding matter with unvanishing pressure, [28] and UV/IR mixing in the Standard Model and gravity. [29] [30] Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, [31] but it is disputed whether anthropic reasoning is scientific. [32] [33]
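For a sense of the scale of the mismatch, a rough Python sketch comparing the observed vacuum energy density to a naive Planck-cutoff estimate (both numbers are the order-of-magnitude values commonly quoted, not precise figures):

```python
# The cosmological constant problem as a hierarchy of energy densities.
import math

rho_observed = (2.3e-12) ** 4    # observed dark-energy density, ~(2.3 meV)^4, in GeV^4
rho_planck = (1.22e19) ** 4      # naive Planck-scale estimate, ~M_Pl^4, in GeV^4

print(f"observed vacuum energy density : {rho_observed:.1e} GeV^4")
print(f"naive Planck-scale estimate    : {rho_planck:.1e} GeV^4")
print(f"mismatch                       : about 10^{math.log10(rho_planck / rho_observed):.0f}")
```

The gap between these two numbers is the famous discrepancy of roughly 120 orders of magnitude often quoted for this problem.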
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.
Technicolor theories are models of physics beyond the Standard Model that address electroweak gauge symmetry breaking, the mechanism through which W and Z bosons acquire masses. Early technicolor theories were modelled on quantum chromodynamics (QCD), the "color" theory of the strong nuclear force, which inspired their name.
The Friedmann–Lemaître–Robertson–Walker metric is a metric based on an exact solution of the Einstein field equations of general relativity. The metric describes a homogeneous, isotropic, expanding universe that is path-connected, but not necessarily simply connected. The general form of the metric follows from the geometric properties of homogeneity and isotropy; Einstein's field equations are only needed to derive the scale factor of the universe as a function of time. Depending on geographical or historical preferences, the four scientists – Alexander Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker – are variously grouped as Friedmann, Friedmann–Robertson–Walker (FRW), Robertson–Walker (RW), or Friedmann–Lemaître (FL). This model is sometimes called the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.
In theoretical physics, supergravity is a modern field theory that combines the principles of supersymmetry and general relativity; this is in contrast to non-gravitational supersymmetric theories such as the Minimal Supersymmetric Standard Model. Supergravity is the gauge theory of local supersymmetry. Since the supersymmetry (SUSY) generators form together with the Poincaré algebra a superalgebra, called the super-Poincaré algebra, supersymmetry as a gauge theory makes gravity arise in a natural way.
Brane cosmology refers to several theories in particle physics and cosmology related to string theory, superstring theory and M-theory.
In particle physics, the hypothetical dilaton particle is a particle of a scalar field that appears in theories with extra dimensions when the volume of the compactified dimensions varies. It appears as a radion in Kaluza–Klein theory's compactifications of extra dimensions. In Brans–Dicke theory of gravity, Newton's constant is not presumed to be constant but instead 1/G is replaced by a scalar field and the associated particle is the dilaton.
In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922.
In theoretical physics, Euclidean quantum gravity is a version of quantum gravity. It seeks to use the Wick rotation to describe the force of gravity according to the principles of quantum mechanics.
In string theory, the string theory landscape is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications.
In theoretical physics, massive gravity is a theory of gravity that modifies general relativity by endowing the graviton with a nonzero mass. In the classical theory, this means that gravitational waves obey a massive wave equation and hence travel at speeds below the speed of light.
Conformal gravity refers to gravity theories that are invariant under conformal transformations in the Riemannian geometry sense; more accurately, they are invariant under Weyl transformations $g_{ab} \rightarrow \Omega^2(x)\, g_{ab}$, where $g_{ab}$ is the metric tensor and $\Omega(x)$ is a function on spacetime.
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction.
Scalar–tensor–vector gravity (STVG) is a modified theory of gravity developed by John Moffat, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. The theory is also often referred to by the acronym MOG.
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental parameters of the Standard Model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons.
In physics, a quantum vortex represents a quantized flux circulation of some physical quantity. In most cases, quantum vortices are a type of topological defect exhibited in superfluids and superconductors. The existence of quantum vortices was first predicted by Lars Onsager in 1949 in connection with superfluid helium. Onsager reasoned that quantisation of vorticity is a direct consequence of the existence of a superfluid order parameter as a spatially continuous wavefunction. Onsager also pointed out that quantum vortices describe the circulation of superfluid and conjectured that their excitations are responsible for superfluid phase transitions. These ideas of Onsager were further developed by Richard Feynman in 1955 and in 1957 were applied to describe the magnetic phase diagram of type-II superconductors by Alexei Alexeyevich Abrikosov. In 1935 Fritz London published a very closely related work on magnetic flux quantization in superconductors. London's fluxoid can also be viewed as a quantum vortex.
In particle physics and string theory (M-theory), the ADD model, also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem. The model tries to explain this problem by postulating that our universe, with its four dimensions, exists on a membrane in a higher dimensional space. It is then suggested that the other forces of nature operate within this membrane and its four dimensions, while the hypothetical gravity-bearing particle, the graviton, can propagate across the extra dimensions. This would explain why gravity is very weak compared to the other fundamental forces. The size of the dimensions in ADD is around the order of the TeV scale, which results in it being experimentally probeable by current colliders, unlike many exotic extra dimensional hypotheses that have the relevant size around the Planck scale.
f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, f, of the Ricci scalar, R. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl. It has become an active field of research following work by Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems.
In general relativity, the Hamilton–Jacobi–Einstein equation (HJEE) or Einstein–Hamilton–Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s, by Asher Peres in 1962 and others. It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics.
The asymptotic safety approach to quantum gravity provides a nonperturbative notion of renormalization in order to find a consistent and predictive quantum field theory of the gravitational interaction and spacetime geometry. It is based upon a nontrivial fixed point of the corresponding renormalization group (RG) flow such that the running coupling constants approach this fixed point in the ultraviolet (UV) limit. This suffices to avoid divergences in physical observables. Moreover, it has predictive power: Generically an arbitrary starting configuration of coupling constants given at some RG scale does not run into the fixed point for increasing scale, but a subset of configurations might have the desired UV properties. For this reason it is possible that — assuming a particular set of couplings has been measured in an experiment — the requirement of asymptotic safety fixes all remaining couplings in such a way that the UV fixed point is approached.