Regularization (physics)

In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility of a separation of scales, namely that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.

It is distinct from renormalization, another technique for controlling infinities without assuming new physics, which works by adjusting for self-interaction feedback.

Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.

Overview

Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance ε in space, which is useful in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ε → 0), but the virtue of the regulator is that for its finite value, the result is finite.

However, the result usually includes terms proportional to expressions like 1/ε which are not well-defined in the limit ε → 0. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related, but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities expressed by seemingly divergent expressions such as 1/ε are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
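To make the role of the regulator concrete, consider the toy logarithmically divergent integral ∫_ε^s dx/x = ln(s/ε). The following Python sketch (a minimal illustration with dimensionless toy quantities, not a physical computation) shows that each regularized quantity blows up as ε → 0, while a "renormalized" combination of two such quantities has a finite limit independent of the regulator.

```python
import numpy as np
from scipy.integrate import quad

def regulated_integral(eps, scale=1.0):
    """Toy 'divergent' integral I(eps) = integral_eps^scale dx/x = ln(scale/eps).

    Finite for every regulator eps > 0, but it grows without bound
    (logarithmically) as eps -> 0, mimicking an ultraviolet divergence.
    """
    value, _ = quad(lambda x: 1.0 / x, eps, scale, limit=200)
    return value

for eps in [1e-2, 1e-4, 1e-6]:
    i_a = regulated_integral(eps, scale=1.0)  # "bare" quantity: diverges
    i_b = regulated_integral(eps, scale=2.0)  # a second bare quantity
    # Renormalized combination: the eps-dependence cancels in the
    # difference, which tends to ln(2) independently of the regulator.
    print(f"eps={eps:.0e}  I_a={i_a:7.3f}  I_b-I_a={i_b - i_a:.6f}")

print(f"regulator-independent limit: ln(2) = {np.log(2):.6f}")
```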

The existence of a limit as ε goes to zero and the independence of the final result from the regulator are nontrivial facts. The underlying reason for them lies in universality, as shown by Kenneth Wilson and Leo Kadanoff, and in the existence of a second-order phase transition. Sometimes, taking the limit as ε goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction. However, even for these two examples, if the regulator only gives reasonable results for ε ≫ 1/Λ (where Λ is an upper energy cutoff) and we are working with scales of the order of 1/Λ, regulators with 1/Λ ≪ ε still give pretty accurate approximations. The physical reason why we cannot take the limit of ε going to zero is the existence of new physics below Λ.

It is not always possible to define a regularization such that the limit of ε going to zero is independent of the regularization. In this case, one says that the theory contains an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).

Classical physics example

The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.

The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e. The mass–energy in the field is

m_em = ∫ (E²/8π) dV = ∫_{r_e}^∞ (1/8π) (q/r²)² 4πr² dr = q²/(2 r_e),

which becomes infinite as r_e → 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of r_e that makes m_em equal to the electron mass is called the classical electron radius, which (setting q = e and restoring factors of c and ε₀) turns out to be

r_e = e²/(4πε₀ m_e c²) = α ℏ/(m_e c) ≈ 2.8 × 10⁻¹⁵ m,

where α is the fine-structure constant and ℏ/(m_e c) is the reduced Compton wavelength of the electron.
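These relations are easy to check numerically. The Python sketch below (using CODATA values from scipy.constants; the function and variable names are illustrative) computes r_e, verifies the identity r_e = α ℏ/(m_e c), and shows the regularized self-energy of the shell growing without bound as the cutoff radius is removed.

```python
import math
from scipy import constants as k

e, m_e, c, eps0, hbar = k.e, k.m_e, k.c, k.epsilon_0, k.hbar

def shell_field_energy(r):
    # Electrostatic energy outside a charged shell of radius r (SI units):
    #   U(r) = e^2 / (8*pi*eps0*r), which diverges as the cutoff r -> 0.
    return e**2 / (8 * math.pi * eps0 * r)

# Classical electron radius (conventional definition):
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"r_e                = {r_e:.4e} m")

# Cross-check: r_e equals alpha times the reduced Compton wavelength.
print(f"alpha * hbar/(m_e c) = {k.fine_structure * hbar / (m_e * c):.4e} m")

# The regulator at work: the self-energy is finite for any finite radius,
# but grows without bound as the regulator is removed (r -> 0).
for r in (r_e, r_e / 10, r_e / 1000):
    print(f"U({r:.1e} m) / (m_e c^2) = {shell_field_energy(r) / (m_e * c**2):.1f}")
```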

Regularization: Classical physics breaks down at small scales, as the gap between an electron and a point particle shown above illustrates. Addressing this problem requires new kinds of additional physical constraints. For instance, in this case, assuming a finite electron radius (i.e., regularizing the electron's mass–energy) suffices to describe the system below a certain size. Similar regularization arguments work in other renormalization problems: a theory may hold under one narrow set of conditions but, owing to calculations involving infinities or singularities, break down under other conditions or scales. In the case of the electron, another way to avoid infinite mass–energy while retaining the point nature of the particle is to postulate tiny additional dimensions over which the particle can 'spread out', rather than restricting its motion to 3D space alone. This is precisely the motivation behind string theory and other multi-dimensional models, including those with multiple time dimensions. Renormalization offers an alternative strategy for resolving infinities in such classical problems: rather than assuming unknown new physics, it assumes interactions of the particle with other particles surrounding it in the environment.

Specific types

Specific types of regularization procedures include:

  - Dimensional regularization
  - Pauli–Villars regularization
  - Lattice regularization
  - Zeta function regularization
  - Causal regularization
  - Hadamard regularization

Realistic regularization

Conceptual problem

Perturbative predictions of quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. The regularization method results in regularized n-point Green's functions (propagators), and a suitable limiting procedure (a renormalization scheme) then leads to perturbative S-matrix elements. These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states). However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum scattering, since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant, by introducing either unphysical particles with a negative metric or wrong statistics, by using discrete space-time, by lowering the dimensionality of space-time, or by some combination thereof). So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning. In addition, there are qualms about renormalization. For a history of, and comments on, this more than half-a-century-old open conceptual problem, see e.g. [3] [4] [5]

Pauli's conjecture

As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators. So it is a prudent, conservative approach to retain the vertices of the Feynman series and modify only the Feynman propagators to create a regularized Feynman series. This is the reasoning behind the formal Pauli–Villars covariant regularization, which modifies Feynman propagators through auxiliary unphysical particles, cf. [6] and the representation of physical reality by Feynman diagrams.
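The effect of such a propagator modification can be seen in a one-line toy model. In the Python sketch below (Euclidean momenta, made-up toy masses, angular factors dropped, and only the radial integral kept, so this illustrates the mechanism rather than an actual loop computation), subtracting the same propagator taken at a heavy auxiliary mass M turns a logarithmically divergent integral into one that converges to ln(M/m) as the cutoff is removed.

```python
import numpy as np
from scipy.integrate import quad

m, M = 1.0, 100.0  # light "physical" mass and heavy Pauli-Villars mass (toy units)

def bare(k):
    # Radial part of a 4D Euclidean one-loop integrand, ~ k^3/(k^2+m^2)^2:
    # behaves like 1/k at large k, so the integral diverges logarithmically.
    return k**3 / (k**2 + m**2) ** 2

def pv_subtracted(k):
    # Pauli-Villars-style subtraction: the difference of propagators falls
    # off like 1/k^3 at large k, so the integral converges.
    return k**3 * (1.0 / (k**2 + m**2) ** 2 - 1.0 / (k**2 + M**2) ** 2)

for cutoff in [1e2, 1e4, 1e6]:
    i_bare, _ = quad(bare, 0, cutoff, limit=200)
    i_pv, _ = quad(pv_subtracted, 0, cutoff, limit=200)
    print(f"cutoff={cutoff:.0e}  bare={i_bare:8.3f}  PV-subtracted={i_pv:.6f}")

print(f"expected PV limit: ln(M/m) = {np.log(M / m):.6f}")
```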

In 1949 Pauli conjectured there is a realistic regularization, which is implied by a theory that respects all the established principles of contemporary physics. [6] [7] So its propagators (i) do not need to be regularized, and (ii) can be regarded as such a regularization of the propagators used in quantum field theories that might reflect the underlying physics. The additional parameters of such a theory do not need to be removed (i.e. the theory needs no renormalization) and may provide some new information about the physics of quantum scattering, though they may turn out experimentally to be negligible. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.

Opinions

Paul Dirac was persistently and extremely critical of renormalization procedures. In 1963, he wrote, "… in the renormalization theory we have a theory that has defied all the attempts of the mathematician to make it sound. I am inclined to suspect that the renormalization theory is something that will not survive in the future,…" [8] He further observed that "One can distinguish between two main procedures for a theoretical physicist. One of them is to work from the experimental basis ... The other procedure is to work from the mathematical basis. One examines and criticizes the existing theory. One tries to pin-point the faults in it and then tries to remove them. The difficulty here is to remove the faults without destroying the very great successes of the existing theory." [9]

Abdus Salam remarked in 1972, "Field-theoretic infinities first encountered in Lorentz's computation of electron [self-mass] have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational." [10] [11]

However, in Gerard ’t Hooft’s opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized. Nature might be telling us something, and we should find out what it is." [12]

The difficulty with a realistic regularization is that so far there is none: its bottom-up approach would destroy nothing of the existing theory's successes, but neither does it yet have any experimental basis.

Minimal realistic regularization

Considering distinct theoretical problems, Dirac in 1963 suggested: "I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. At this point I find myself in disagreement with most physicists. They are inclined to think one master idea will be discovered that will solve all these problems together. I think it is asking too much to hope that anyone will be able to solve all these problems together. One should separate them one from another as much as possible and try to tackle them separately. And I believe the future development of physics will consist of solving them one at a time, and that after any one of them has been solved there will still be a great mystery about how to attack further ones." [8]

According to Dirac, "Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis." [9]

Dirac’s two preceding remarks suggest that we should start searching for a realistic regularization in the case of quantum electrodynamics (QED) in the four-dimensional Minkowski spacetime, starting with the original QED Lagrangian density. [8] [9]

The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form. [5] The free-field part of the Lagrangian density determines the Feynman propagators, whereas the rest determines the vertices. As the QED vertices are considered to adequately describe interactions in QED scattering, it makes sense to modify only the free-field part of the Lagrangian density so as to obtain such regularized Feynman series that the Lehmann–Symanzik–Zimmermann reduction formula provides a perturbative S-matrix that: (i) is Lorentz-invariant and unitary; (ii) involves only the QED particles; (iii) depends solely on QED parameters and those introduced by the modification of the Feynman propagators—for particular values of these parameters it is equal to the QED perturbative S-matrix; and (iv) exhibits the same symmetries as the QED perturbative S-matrix. Let us refer to such a regularization as the minimal realistic regularization, and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.

Transport theoretic approach

According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using a more detailed description than can be provided by differential field equations. And Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path. If we looked more closely, we would see individual neutrons running around." And then he wondered, "Could it be that the real world consists of little X-ons which can be seen only at very tiny distances? And that in our measurements we are always observing on such a large scale that we can’t see these little X-ons, and that is why we get the differential equations? ... Are they [therefore] also correct only as a smoothed-out imitation of a really much more complicated microscopic world?" [13]
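Feynman's neutron-diffusion remark can be mimicked in a few lines. The Python sketch below (a toy one-dimensional random walk with made-up sizes, not a neutron transport model) compares the empirical density of many discrete walkers with the Gaussian profile predicted by the diffusion equation: at large scales, the smoothed differential-equation description matches the "individual neutrons running around".

```python
import numpy as np

rng = np.random.default_rng(0)

# Microscopic model: net displacement of each walker after n_steps unit
# steps of +/-1; the sum of n +/-1 steps equals 2*Binomial(n, 1/2) - n.
n_walkers, n_steps = 200_000, 400
positions = 2 * rng.binomial(n_steps, 0.5, size=n_walkers) - n_steps

# Macroscopic (smoothed) description: the diffusion equation predicts a
# Gaussian density with variance n_steps (in these lattice units).
x = np.arange(-80, 81, 2)  # after an even number of steps, positions are even
gaussian = np.exp(-x**2 / (2 * n_steps)) / np.sqrt(2 * np.pi * n_steps)

# Empirical density per unit length (each occupied site represents 2 units).
empirical = np.array([(positions == xi).mean() for xi in x]) / 2

print(f"max |walkers - diffusion| = {np.abs(empirical - gaussian).max():.1e}")
print(f"peak density              = {gaussian.max():.1e}")
```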

Already in 1938, Heisenberg [14] proposed that a quantum field theory can provide only an idealized, large-scale description of quantum dynamics, valid for distances larger than some fundamental length, which Bjorken and Drell also expected in 1965. Feynman's preceding remark provides a possible physical reason for the existence of such a length; alternatively, it may be just another way of saying the same thing (that there is a fundamental unit of distance) while conveying no new information.

Hints at new physics

The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the Standard Model. Infinities of the non-gravitational forces in QFT can be controlled via renormalization alone, but additional regularization, and hence new physics, is uniquely required for gravity. The regularizers model, and work around, the breakdown of QFT at small scales, and thus show clearly the need for some other theory to come into play beyond QFT at these scales. A. Zee (Quantum Field Theory in a Nutshell, 2003) considers this a benefit of the regularization framework: theories can work well in their intended domains while also containing information about their own limitations and pointing clearly to where new physics is needed.


References

  1. 't Hooft, G.; Veltman, M. (1972). "Regularization and renormalization of gauge fields". Nuclear Physics B. 44 (1): 189–213. doi:10.1016/0550-3213(72)90279-9.
  2. Scharf, G. (1995). Finite Quantum Electrodynamics: The Causal Approach. Springer.
  3. Cao, Tian Yu; Schweber, Silvan S. (1993). "The conceptual foundations and the philosophical aspects of renormalization theory". Synthese. 97 (1): 33–108. doi:10.1007/bf01255832.
  4. Brown, L. M., ed. (1993). Renormalization. New York: Springer-Verlag.
  5. Weinberg, S. (1995). The Quantum Theory of Fields. Vol. 1. Cambridge University Press. Sec. 1.3 and Ch. 9.
  6. Villars, F. (1960). "Regularization and Non-Singular Interactions in Quantum Field Theory". In M. Fierz; V. F. Weisskopf (eds.). Theoretical Physics in the Twentieth Century. New York: Interscience Publishers. pp. 78–106.
  7. Pauli, W.; Villars, F. (1949). "On the Invariant Regularization in Relativistic Quantum Theory". Reviews of Modern Physics. 21 (3): 434–444. doi:10.1103/revmodphys.21.434.
  8. Dirac, P. A. M. (May 1963). "The Evolution of the Physicist's Picture of Nature". Scientific American. 208 (5): 45–53. doi:10.1038/scientificamerican0563-45.
  9. Dirac, P. A. M. (1990) [1968]. "Methods in theoretical physics". In A. Salam (ed.). Unification of Fundamental Forces. Cambridge University Press. pp. 125–143. ISBN 9780521371407.
  10. Isham, C. J.; Salam, Abdus; Strathdee, J. (1971). "Infinity Suppression in Gravity-Modified Quantum Electrodynamics". Physical Review D. 3 (8): 1805–1817. doi:10.1103/physrevd.3.1805.
  11. Isham, C. J.; Salam, Abdus; Strathdee, J. (1972). "Infinity Suppression in Gravity-Modified Electrodynamics. II". Physical Review D. 5 (10): 2548–2565. doi:10.1103/physrevd.5.2548.
  12. 't Hooft, G. (1997). In Search of the Ultimate Building Blocks. Cambridge: Cambridge University Press.
  13. The Feynman Lectures on Physics. Vol. II, Section 12–7: The "underlying unity" of nature.
  14. Heisenberg, W. (1938). "Über die in der Theorie der Elementarteilchen auftretende universelle Länge". Annalen der Physik. 32 (1): 20–33. doi:10.1002/andp.19384240105.