Hierarchy problem

In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. [1] There is no scientific consensus on why, for example, the weak force is 10²⁴ times stronger than gravity.

Technical definition

A hierarchy problem [2] occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it.

Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness.

Over the past decade, many scientists [3] [4] [5] [6] [7] have argued that the hierarchy problem can be quantified using Bayesian statistics.

Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics are most important. Because we do not know the precise details of the shortest-distance theory of physics, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.

Overview

Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and 404,331,557,902,116,024,553,602,703,216.58 (roughly 4×10²⁹). Scientists might wonder how such figures arise, and might be especially curious about a theory where three values are close to one while the fourth is so different; in other words, about the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder: if one force is so much weaker than the others that it needs a factor of 4×10²⁹ to relate it to them in terms of effects, how did our universe come to be so exactly balanced when its forces emerged? In current particle physics, the differences between some parameters are much larger than this, so the question is even more noteworthy.

One answer given by philosophers is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be. [8] [9]

A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There might be parameters that we can derive physical constants from that have less unbalanced values, or there might be a model with fewer parameters.[ citation needed ]

Examples in particle physics

The Higgs mass

In particle physics, the most important hierarchy problem is the question of why the weak force is 10²⁴ times as strong as gravity. [10] Both of these forces involve constants of nature: the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and would be expected to be closer to Newton's constant, unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.

Cancellation of the Higgs boson quadratic mass renormalization between fermionic top quark loop and scalar stop squark tadpole Feynman diagrams in a supersymmetric extension of the Standard Model

More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.
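The scale of the required cancellation can be illustrated with a back-of-the-envelope estimate. The sketch below assumes a generic one-loop correction of order Λ²/(16π²) with the cutoff taken at the Planck scale; the numbers are illustrative inputs, not values from the cited sources:

```python
# Sketch of the fine-tuning implied by a quadratically divergent one-loop
# correction of order Lambda^2/(16 pi^2) when the cutoff Lambda is taken at
# the Planck scale. All numbers are assumed, illustrative values.
import math

m_H = 125.0        # observed Higgs boson mass, GeV
Lambda = 1.22e19   # assumed cutoff: the Planck scale, GeV

delta_m2 = Lambda**2 / (16 * math.pi**2)  # generic one-loop quadratic correction, GeV^2
tuning = delta_m2 / m_H**2                # how much larger the correction is than m_H^2

print(f"correction ~ {delta_m2:.1e} GeV^2 vs m_H^2 = {m_H**2:.1e} GeV^2")
print(f"bare mass and correction must cancel to ~1 part in {tuning:.1e}")
```

With these inputs the bare mass and the correction must cancel to roughly one part in 10³², which is the sense in which the Standard Model Higgs mass appears fine-tuned.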

The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.

Theoretical solutions

Many solutions to the hierarchy problem have been proposed.

Supersymmetry

Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. [11] This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry.

Each particle that couples to the Higgs field has an associated Yukawa coupling λ_f. The coupling with the Higgs field for fermions gives an interaction term −λ_f ψ̄Hψ, with ψ being the Dirac field and H the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:

Δm_H² = −(|λ_f|² / 8π²) Λ_UV² + …

Here Λ_UV is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then we have a quadratically divergent correction. However, suppose there existed two complex scalars (taken to be spin 0) such that:

λ_S = |λ_f|² (the couplings to the Higgs are exactly the same).

Then by the Feynman rules, the correction (from both scalars) is:

Δm_H² = 2 × (λ_S / 16π²) Λ_UV² + …

(Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions will have a negative contribution and bosons a positive contribution. This fact is exploited.)

This gives a total contribution to the Higgs mass of zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles. [12]
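The cancellation described above can be checked numerically. The sketch below uses an illustrative top-sized Yukawa coupling and an assumed Planck-scale cutoff, comparing the fermionic correction −|λ_f|²Λ²/(8π²) against the contribution +λ_S Λ²/(16π²) from each of the two scalars with λ_S = |λ_f|²:

```python
# Numerical check, under assumed couplings, that the quadratic pieces of the
# fermion loop and of two complex scalars with lambda_S = |lambda_f|^2 cancel.
import math

Lambda = 1.22e19   # assumed UV cutoff, GeV (Planck scale)
lam_f = 0.94       # illustrative fermion Yukawa coupling (top-quark-sized)

# Fermion loop: Delta m_H^2 = -|lam_f|^2 / (8 pi^2) * Lambda^2 + ...
fermion = -abs(lam_f)**2 / (8 * math.pi**2) * Lambda**2

# Two complex scalars, each contributing +lam_S / (16 pi^2) * Lambda^2 + ...
lam_S = abs(lam_f)**2
scalars = 2 * lam_S / (16 * math.pi**2) * Lambda**2

print(f"fermion: {fermion:.3e}, scalars: {scalars:.3e}, sum: {fermion + scalars:.3e}")
```

Each term individually is of order 10³⁵ GeV², yet their sum vanishes: this is the sign-flip between fermion and boson loops that supersymmetry systematizes.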

Conformal

Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem arises. But in the absence of a quadratic term in the Higgs field, one must find a way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Coleman–Weinberg mechanism, with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai [13] and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.

Extra dimensions

No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. [14] However, extra dimensions could explain why gravity is so weak, and why the expansion of the universe is faster than expected. [15]

If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity:

F(r) = −(m₁ m₂ / M_Pl²) (1/r²)    (1)

which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass: G = 1/M_Pl² in natural units.

If we extend this idea to δ extra dimensions, then we get:

F(r) = −(m₁ m₂ / M_Pl(3+1+δ)^(2+δ)) (1/r^(2+δ))    (2)

where M_Pl(3+1+δ) is the (3+1+δ)-dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size n, much smaller than the normal dimensions. If we let r ≪ n, then we get (2). However, if we let r ≫ n, then we get our usual Newton's law. This is because when r ≫ n, the flux in the extra dimensions becomes a constant: there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to n^δ, because this is the flux in the extra dimensions. The formula is:

F(r) = −(m₁ m₂ / (M_Pl(3+1+δ)^(2+δ) n^δ)) (1/r²)

which, on matching to Newton's law (1), gives:

M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ

Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong at short distances, but this must be compensated by the number of extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
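As a rough consistency check, the relation M_Pl² = M_Pl(3+1+δ)^(2+δ) n^δ can be solved for the size n of the extra dimensions. The sketch below assumes a fundamental Planck mass of about 1 TeV; the inputs are illustrative values:

```python
# Estimate the size n of delta equal-sized extra dimensions required so that
# an assumed fundamental (higher-dimensional) Planck mass of ~1 TeV reproduces
# the observed four-dimensional Planck mass. Illustrative inputs only.
M_pl = 1.22e19       # 4D Planck mass, GeV
M_star = 1.0e3       # assumed fundamental Planck mass, GeV (~1 TeV)
hbar_c = 1.9733e-16  # GeV*m, converts 1/GeV to metres

sizes = {}
for delta in (1, 2, 3):
    # From M_pl^2 = M_star^(2+delta) * n^delta:
    n = (M_pl**2 / M_star**(2 + delta)) ** (1.0 / delta)  # in 1/GeV
    sizes[delta] = n * hbar_c                             # in metres
    print(f"delta={delta}: n ≈ {sizes[delta]:.1e} m")
```

For δ = 1 the required size is astronomically large (and hence ruled out), while δ = 2 gives roughly millimetre-scale dimensions, which is why such scenarios motivated sub-millimetre tests of Newtonian gravity.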

This section is adapted from "Quantum Field Theory in a Nutshell" by A. Zee. [16]

Braneworld models

In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. [17] [18] This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale. [19]

In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles in which he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. [20] [21] [22] It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.

Subsequently, the closely related Randall–Sundrum scenarios were proposed, offering their own solution to the hierarchy problem.

UV/IR mixing

In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. [23] In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory. [24]

The cosmological constant

In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny, but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but it is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, [25] [26] [27] adding matter with non-vanishing pressure, [28] and UV/IR mixing in the Standard Model and gravity. [29] [30] Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, [31] but it is disputed whether anthropic reasoning is scientific. [32] [33]
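The severity of this hierarchy can be illustrated with an order-of-magnitude estimate. The sketch below compares the observed dark-energy density, taken as roughly (2.3 meV)⁴, with a naive quantum-field-theory estimate cut off at the reduced Planck mass; both inputs are assumed, illustrative values:

```python
# Order-of-magnitude comparison of the observed vacuum energy density with a
# naive quantum-field-theory estimate that cuts off at the Planck scale.
# The input numbers are assumed, illustrative values.
import math

rho_obs = (2.3e-12) ** 4   # observed dark-energy density ~ (2.3 meV)^4, in GeV^4
M_pl = 2.4e18              # reduced Planck mass, GeV
rho_qft = M_pl ** 4        # naive Planck-scale cutoff estimate, GeV^4

discrepancy = math.log10(rho_qft / rho_obs)
print(f"naive estimate exceeds observation by ~10^{discrepancy:.0f}")
```

This reproduces the often-quoted discrepancy of roughly 120 orders of magnitude between the naive estimate and observation.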


Related Research Articles

Quantum field theory

In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory.

Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics.

Technicolor (physics)

Technicolor theories are models of physics beyond the Standard Model that address electroweak gauge symmetry breaking, the mechanism through which W and Z bosons acquire masses. Early technicolor theories were modelled on quantum chromodynamics (QCD), the "color" theory of the strong nuclear force, which inspired their name.

The Friedmann–Lemaître–Robertson–Walker metric is a metric based on an exact solution of the Einstein field equations of general relativity. The metric describes a homogeneous, isotropic, expanding universe that is path-connected, but not necessarily simply connected. The general form of the metric follows from the geometric properties of homogeneity and isotropy; Einstein's field equations are only needed to derive the scale factor of the universe as a function of time. Depending on geographical or historical preferences, the set of the four scientists – Alexander Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker – are variously grouped as Friedmann, Friedmann–Robertson–Walker (FRW), Robertson–Walker (RW), or Friedmann–Lemaître (FL). This model is sometimes called the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.

Supergravity

In theoretical physics, supergravity is a modern field theory that combines the principles of supersymmetry and general relativity; this is in contrast to non-gravitational supersymmetric theories such as the Minimal Supersymmetric Standard Model. Supergravity is the gauge theory of local supersymmetry. Since the supersymmetry (SUSY) generators form together with the Poincaré algebra a superalgebra, called the super-Poincaré algebra, supersymmetry as a gauge theory makes gravity arise in a natural way.

Brane cosmology refers to several theories in particle physics and cosmology related to string theory, superstring theory and M-theory.

In particle physics, the hypothetical dilaton particle is a particle of a scalar field that appears in theories with extra dimensions when the volume of the compactified dimensions varies. It appears as a radion in Kaluza–Klein theory's compactifications of extra dimensions. In Brans–Dicke theory of gravity, Newton's constant is not presumed to be constant but instead 1/G is replaced by a scalar field and the associated particle is the dilaton.

In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922.

In theoretical physics, Euclidean quantum gravity is a version of quantum gravity. It seeks to use the Wick rotation to describe the force of gravity according to the principles of quantum mechanics.

In string theory, the string theory landscape is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications.

In theoretical physics, massive gravity is a theory of gravity that modifies general relativity by endowing the graviton with a nonzero mass. In the classical theory, this means that gravitational waves obey a massive wave equation and hence travel at speeds below the speed of light.

In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction.

Physics beyond the Standard Model

Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental parameters of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons.

In physics, naturalness is the aesthetic property that the dimensionless ratios between free parameters or physical constants appearing in a physical theory should take values "of order 1" and that free parameters are not fine-tuned. That is, a natural theory would have parameter ratios with values like 2.34 rather than 234000 or 0.000234.

Quantum vortex

In physics, a quantum vortex represents a quantized flux circulation of some physical quantity. In most cases, quantum vortices are a type of topological defect exhibited in superfluids and superconductors. The existence of quantum vortices was first predicted by Lars Onsager in 1949 in connection with superfluid helium. Onsager reasoned that quantisation of vorticity is a direct consequence of the existence of a superfluid order parameter as a spatially continuous wavefunction. Onsager also pointed out that quantum vortices describe the circulation of superfluid and conjectured that their excitations are responsible for superfluid phase transitions. These ideas of Onsager were further developed by Richard Feynman in 1955 and in 1957 were applied to describe the magnetic phase diagram of type-II superconductors by Alexei Alexeyevich Abrikosov. In 1935 Fritz London published a very closely related work on magnetic flux quantization in superconductors. London's fluxoid can also be viewed as a quantum vortex.

In particle physics and string theory (M-theory), the ADD model, also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem. The model tries to explain this problem by postulating that our universe, with its four dimensions, exists on a membrane in a higher dimensional space. It is then suggested that the other forces of nature operate within this membrane and its four dimensions, while the hypothetical gravity-bearing particle, the graviton, can propagate across the extra dimensions. This would explain why gravity is very weak compared to the other fundamental forces. The size of the dimensions in ADD is around the order of the TeV scale, which makes the model experimentally testable at current colliders, unlike many exotic extra-dimensional hypotheses whose relevant size is around the Planck scale.

f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, f, of the Ricci scalar, R. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl. It has become an active field of research following work by Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems.

In general relativity, the Hamilton–Jacobi–Einstein equation (HJEE) or Einstein–Hamilton–Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s, by Asher Peres in 1962 and others. It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics.

The asymptotic safety approach to quantum gravity provides a nonperturbative notion of renormalization in order to find a consistent and predictive quantum field theory of the gravitational interaction and spacetime geometry. It is based upon a nontrivial fixed point of the corresponding renormalization group (RG) flow such that the running coupling constants approach this fixed point in the ultraviolet (UV) limit. This suffices to avoid divergences in physical observables. Moreover, it has predictive power: Generically an arbitrary starting configuration of coupling constants given at some RG scale does not run into the fixed point for increasing scale, but a subset of configurations might have the desired UV properties. For this reason it is possible that — assuming a particular set of couplings has been measured in an experiment — the requirement of asymptotic safety fixes all remaining couplings in such a way that the UV fixed point is approached.

References

  1. "The Hierarchy Problem | Of Particular Significance". Profmattstrassler.com. 16 August 2011. Retrieved 13 December 2015.
  2. Arkani-Hamed, Nima; Dimopoulos, Savas; Dvali, Gia (1998-06-18). "The hierarchy problem and new dimensions at a millimeter". Physics Letters B. 429 (3): 263–272. arXiv: hep-ph/9803315 . doi: 10.1016/S0370-2693(98)00466-3 . ISSN   0370-2693.
  3. Fowlie, Andrew; Balazs, Csaba; White, Graham; Marzola, Luca; Raidal, Martti (17 August 2016). "Naturalness of the relaxion mechanism". Journal of High Energy Physics. 2016 (8): 100. arXiv: 1602.03889 . Bibcode:2016JHEP...08..100F. doi:10.1007/JHEP08(2016)100. S2CID   119102534.
  4. Fowlie, Andrew (10 July 2014). "CMSSM, naturalness and the 'fine-tuning price' of the Very Large Hadron Collider". Physical Review D. 90 (1): 015010. arXiv: 1403.3407 . Bibcode:2014PhRvD..90a5010F. doi:10.1103/PhysRevD.90.015010. S2CID   118362634.
  5. Fowlie, Andrew (15 October 2014). "Is the CNMSSM more credible than the CMSSM?". The European Physical Journal C. 74 (10). arXiv: 1407.7534 . doi:10.1140/epjc/s10052-014-3105-y. S2CID   119304794.
  6. Cabrera, Maria Eugenia; Casas, Alberto; Austri, Roberto Ruiz de; Marzola, Luca; Raidal, Martti (2009). "Bayesian approach and naturalness in MSSM analyses for the LHC". Journal of High Energy Physics. 2009 (3): 075. arXiv: 0812.0536 . Bibcode:2009JHEP...03..075C. doi:10.1088/1126-6708/2009/03/075. S2CID   18276270.
  7. Fichet, S. (18 December 2012). "Quantified naturalness from Bayesian statistics". Physical Review D. 86 (12): 125029. arXiv: 1204.4940 . Bibcode:2012PhRvD..86l5029F. doi:10.1103/PhysRevD.86.125029. S2CID   119282331.
  8. "Anthropic principle | Cosmology, Physics & Philosophy | Britannica". www.britannica.com. 2024-02-08. Retrieved 2024-04-01.
  9. Dimopoulos, Savas; Thomas, Scott (2007), Carr, Bernard (ed.), "The anthropic principle, dark energy and the LHC", Universe or Multiverse?, Cambridge: Cambridge University Press (published 5 July 2014), pp. 211–218, ISBN   978-0-521-14069-0 , retrieved 2024-04-01
  10. "Lecture 1: Introduction; Couloumb's law; Superposition; Electric energy" (PDF). Massachusetts Institute of Technology . Retrieved 4 November 2023.
  11. Barbieri, R.; Giudice, G. F. (1988). "Upper Bounds on Supersymmetric Particle Masses". Nucl. Phys. B. 306 (1): 63. Bibcode:1988NuPhB.306...63B. doi:10.1016/0550-3213(88)90171-X.
  12. Martin, Stephen P. (1998). "A Supersymmetry Primer". Perspectives on Supersymmetry. Advanced Series on Directions in High Energy Physics. Vol. 18. pp. 1–98. arXiv: hep-ph/9709356 . doi:10.1142/9789812839657_0001. ISBN   978-981-02-3553-6. S2CID   118973381.
  13. Meissner, K.; Nicolai, H. (2007). "Conformal Symmetry and the Standard Model". Physics Letters . B648 (4): 312–317. arXiv: hep-th/0612165 . Bibcode:2007PhLB..648..312M. doi:10.1016/j.physletb.2007.03.023. S2CID   17973378.
  14. Aad, G.; Abajyan, T.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; Abouzeid, O. S.; Abramowicz, H.; Abreu, H.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Addy, T. N.; Adelman, J.; Adomeit, S.; Adye, T.; Aefsky, S.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmad, A.; Ahmadov, F.; Aielli, G.; Åkesson, T. P. A.; et al. (2014). "Search for Quantum Black-Hole Production in High-Invariant-Mass Lepton+Jet Final States Using Proton-Proton Collisions at √s = 8 TeV and the ATLAS Detector". Physical Review Letters. 112 (9): 091804. arXiv: 1311.2006 . Bibcode:2014PhRvL.112i1804A. doi:10.1103/PhysRevLett.112.091804. PMID   24655244. S2CID   204934578.
  15. "Extra dimensions, gravitons, and tiny black holes". Home.web.cern.ch. 20 January 2012. Retrieved 13 December 2015.
  16. Zee, A. (2003). Quantum field theory in a nutshell. Princeton University Press. Bibcode:2003qftn.book.....Z. ISBN   978-0-691-01019-9.
  17. Arkani-Hamed, N.; Dimopoulos, S.; Dvali, G. (1998). "The Hierarchy problem and new dimensions at a millimeter". Physics Letters . B429 (3–4): 263–272. arXiv: hep-ph/9803315 . Bibcode:1998PhLB..429..263A. doi:10.1016/S0370-2693(98)00466-3. S2CID   15903444.
  18. Arkani-Hamed, N.; Dimopoulos, S.; Dvali, G. (1999). "Phenomenology, astrophysics and cosmology of theories with submillimeter dimensions and TeV scale quantum gravity". Physical Review . D59 (8): 086004. arXiv: hep-ph/9807344 . Bibcode:1999PhRvD..59h6004A. doi:10.1103/PhysRevD.59.086004. S2CID   18385871.
  19. For a pedagogical introduction, see Shifman, M. (2009). Large Extra Dimensions: Becoming acquainted with an alternative paradigm. Crossing the boundaries: Gauge dynamics at strong coupling. International Journal of Modern Physics A. Vol. 25, no. 2n03. Singapore: World Scientific. pp. 199–225. arXiv: 0907.3074 . Bibcode:2010IJMPA..25..199S. doi:10.1142/S0217751X10048548.
  20. Gogberashvili, Merab; Ahluwalia, D. V. (2002). "Hierarchy Problem in the Shell-Universe Model". International Journal of Modern Physics D. 11 (10): 1635–1638. arXiv: hep-ph/9812296 . Bibcode:2002IJMPD..11.1635G. doi:10.1142/S0218271802002992. S2CID   119339225.
  21. Gogberashvili, M. (2000). "Our world as an expanding shell". Europhysics Letters. 49 (3): 396–399. arXiv: hep-ph/9812365 . Bibcode:2000EL.....49..396G. doi:10.1209/epl/i2000-00162-1. S2CID   38476733.
  22. Gogberashvili, Merab (1999). "Four Dimensionality in Non-Compact Kaluza–Klein Model". Modern Physics Letters A. 14 (29): 2025–2031. arXiv: hep-ph/9904383 . Bibcode:1999MPLA...14.2025G. doi:10.1142/S021773239900208X. S2CID   16923959.
  23. Craig, Nathaniel; Koren, Seth (6 March 2020). "IR dynamics from UV divergences: UV/IR mixing, NCFT, and the hierarchy problem". Journal of High Energy Physics. 2020 (37): 37. arXiv: 1909.01365 . Bibcode:2020JHEP...03..037C. doi:10.1007/JHEP03(2020)037. S2CID   202540077.
  24. Abel, Steven; Dienes, Keith R. (29 December 2021). "Calculating the Higgs mass in string theory". Physical Review D. 104 (12): 126032. arXiv: 2106.04622 . Bibcode:2021PhRvD.104l6032A. doi:10.1103/PhysRevD.104.126032. S2CID   235377340.
  25. Bull, Philip, Yashar Akrami, Julian Adamek, Tessa Baker, Emilio Bellini, Jose Beltrán Jiménez, Eloisa Bentivegna et al. "Beyond ΛCDM: Problems, solutions, and the road ahead." Physics of the Dark Universe 12 (2016): 56-99.
  26. Ellis, George F. R. (2014). "The trace-free Einstein equations and inflation". General Relativity and Gravitation . 46: 1619. arXiv: 1306.3021 . Bibcode:2014GReGr..46.1619E. doi:10.1007/s10714-013-1619-5. S2CID   119000135.
  27. Percacci, R. (2018). "Unimodular quantum gravity and the cosmological constant". Foundations of Physics . 48 (10): 1364–1379. arXiv: 1712.09903 . Bibcode:2018FoPh...48.1364P. doi:10.1007/s10701-018-0189-5. S2CID   118934871.
  28. Luongo, Orlando; Muccino, Marco (2018-11-21). "Speeding up the Universe using dust with pressure". Physical Review D. 98 (10): 2–3. arXiv: 1807.00180 . Bibcode:2018PhRvD..98j3520L. doi:10.1103/physrevd.98.103520. ISSN   2470-0010. S2CID   119346601.
  29. Cohen, Andrew; Kaplan, David B.; Nelson, Ann (21 June 1999). "Effective Field Theory, Black Holes, and the Cosmological Constant". Physical Review Letters. 82 (25): 4971–4974. arXiv: hep-th/9803132 . Bibcode:1999PhRvL..82.4971C. doi:10.1103/PhysRevLett.82.4971. S2CID   17203575.
  30. Nikita Blinov; Patrick Draper (7 July 2021). "Densities of States and the CKN Bound". arXiv: 2107.03530 [hep-ph].
  31. Martel, Hugo; Shapiro, Paul R.; Weinberg, Steven (January 1998). "Likely Values of the Cosmological Constant". The Astrophysical Journal. 492 (1): 29–40. arXiv: astro-ph/9701099 . Bibcode:1998ApJ...492...29M. doi:10.1086/305016. S2CID   119064782.
  32. Penrose, R. (1989). The Emperor's New Mind . Oxford University Press. ISBN   978-0-19-851973-7. Chapter 10.
  33. Starkman, G. D.; Trotta, R. (2006). "Why Anthropic Reasoning Cannot Predict Λ". Physical Review Letters. 97 (20): 201301. arXiv: astro-ph/0607227 . Bibcode:2006PhRvL..97t1301S. doi:10.1103/PhysRevLett.97.201301. PMID   17155671. S2CID   27409290. See also this news story.