Quantum field theory

History
In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Major advances in the theory were made in the 1940s and 1950s, leading to the introduction of renormalized quantum electrodynamics (QED). The field theory behind QED was so accurate and successful in its predictions that efforts were made to apply the same basic concepts to the other forces of nature. Beginning in 1954, the parallel was found by way of gauge theory, leading, by the late 1970s, to quantum field models of the strong nuclear force and the weak nuclear force, united in the modern Standard Model of particle physics.
Efforts to describe gravity using the same techniques have, to date, failed. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to several different branches of physics.
Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. In particular, de Broglie in 1924 introduced the idea of a wave description of elementary systems in the following way: "we proceed in this work from the assumption of the existence of a certain periodic phenomenon of a yet to be determined character, which is to be attributed to each and every isolated energy parcel". [1]
In 1925, Werner Heisenberg, Max Born, and Pascual Jordan constructed just such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators, and by then applying the canonical quantization procedure to these oscillators; their paper was published in 1926. [2] [3] [4] This theory assumed that no electric charges or currents were present and today would be called a free field theory.
The first reasonably complete theory of quantum electrodynamics, which included both the electromagnetic field and electrically charged matter as quantum mechanical objects, was created by Paul Dirac in 1927. [5] This quantum field theory could be used to model important processes such as the emission of a photon by an electron dropping into a quantum state of lower energy, a process in which the number of particles changes—one atom in the initial state becomes an atom plus a photon in the final state. It is now understood that the ability to describe such processes is one of the most important features of quantum field theory.
The final crucial step was Enrico Fermi's theory of β-decay (1934). [6] [7] In it, fermion species nonconservation was shown to follow from second quantization: creation and annihilation of fermions came to the fore and quantum field theory was seen to describe particle decays. (Fermi's breakthrough was somewhat foreshadowed in the abstract studies of Soviet physicists, Viktor Ambartsumian and Dmitri Ivanenko, in particular the Ambarzumian–Ivanenko hypothesis of creation of massive particles (1930). [8] The idea was that not only the quanta of the electromagnetic field, photons, but also other particles might emerge and disappear as a result of their interaction with other particles.)
It was evident from the beginning that a proper quantum treatment of the electromagnetic field had to somehow incorporate Einstein's relativity theory, which had grown out of the study of classical electromagnetism. This need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 [9] [10] that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations (specifically, they showed that the field commutators were Lorentz invariant). A further boost for quantum field theory came with the discovery of the Dirac equation, which was originally formulated and interpreted as a single-particle equation analogous to the Schrödinger equation, but unlike the Schrödinger equation, the Dirac equation satisfies both Lorentz invariance, that is, the requirements of special relativity, and the rules of quantum mechanics. The Dirac equation accommodated the spin-1/2 value of the electron and accounted for its magnetic moment as well as giving accurate predictions for the spectrum of hydrogen.
The attempted interpretation of the Dirac equation as a single-particle equation could not be maintained long, however, and finally it was shown that several of its undesirable properties (such as negative-energy states) could be made sense of by reformulating and reinterpreting the Dirac equation as a true field equation, in this case for the quantized "Dirac field" or the "electron field", with the "negative-energy solutions" pointing to the existence of anti-particles. This work was performed first by Dirac himself with the invention of hole theory in 1930 and by Wendell Furry, Robert Oppenheimer, Vladimir Fock, and others. Erwin Schrödinger, during the same period that he discovered his equation in 1926, [11] also independently found the relativistic generalization of it known as the Klein–Gordon equation but dismissed it since, without spin, it predicted impossible properties for the hydrogen spectrum. (See Oskar Klein and Walter Gordon.) All relativistic wave equations that describe spin-zero particles are said to be of the Klein–Gordon type.
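For reference, the two relativistic wave equations discussed above take the following standard textbook forms in natural units (ℏ = c = 1); the notation is the modern one and is not tied to any particular historical paper.

```latex
% Dirac equation for a spin-1/2 field \psi (natural units \hbar = c = 1)
(i\gamma^\mu \partial_\mu - m)\,\psi(x) = 0

% Klein--Gordon equation for a spin-0 field \phi
(\partial^\mu \partial_\mu + m^2)\,\phi(x) = 0
```

Interpreted as field equations rather than single-particle wave equations, both admit the negative-frequency solutions that, after quantization, describe antiparticles.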
A subtle and careful analysis in 1933 by Niels Bohr and Léon Rosenfeld [12] showed that there is a fundamental limitation on the ability to simultaneously measure the electric and magnetic field strengths that enter into the description of charges in interaction with radiation, imposed by the uncertainty principle, which must apply to all canonically conjugate quantities. This limitation is crucial for the successful formulation and interpretation of a quantum field theory of photons and electrons (quantum electrodynamics), and indeed, any perturbative quantum field theory. The analysis of Bohr and Rosenfeld explains fluctuations in the values of the electromagnetic field that differ from the classically "allowed" values distant from the sources of the field.
Their analysis was crucial to showing that the limitations and physical implications of the uncertainty principle apply to all dynamical systems, whether fields or material particles. Their analysis also convinced most physicists that any notion of returning to a fundamental description of nature based on classical field theory, such as what Einstein aimed at with his numerous and failed attempts at a classical unified field theory, was simply out of the question. Fields had to be quantized.
The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. In 1927, Pascual Jordan tried to extend the canonical quantization of fields to the many-body wave functions of identical particles [13] [14] using a formalism which is known as statistical transformation theory; [15] this procedure is now sometimes called second quantization. [16] [17] Dirac is also credited with the invention, as he introduced the key ideas in a 1927 paper. [18] [19] In 1928, Jordan and Eugene Wigner found that the quantum field describing electrons, or other fermions, had to be expanded using anti-commuting creation and annihilation operators due to the Pauli exclusion principle (see Jordan–Wigner transformation). This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics and nuclear physics.
Despite its early successes, quantum field theory was plagued by several serious theoretical difficulties. Basic physical quantities, such as the self-energy of the electron (the energy shift of electron states due to the presence of the electromagnetic field), gave infinite, divergent contributions—a nonsensical result—when computed using the perturbative techniques available in the 1930s and most of the 1940s. The electron self-energy problem was already a serious issue in classical electromagnetic field theory, where the attempt to attribute to the electron a finite size or extent (the classical electron radius) led immediately to the question of what non-electromagnetic stresses would need to be invoked, which would presumably hold the electron together against the Coulomb repulsion of its finite-sized "parts". The situation was dire, and had certain features that reminded many of the "Rayleigh–Jeans catastrophe". What made the situation in the 1940s so desperate and gloomy, however, was the fact that the correct ingredients (the second-quantized Maxwell–Dirac field equations) for the theoretical description of interacting photons and electrons were well in place, and no major conceptual change was needed analogous to that which was necessitated by a finite and physically sensible account of the radiative behavior of hot objects, as provided by the Planck radiation law.
Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of the hydrogen atom, [20] now known as the Lamb shift, and of the magnetic moment of the electron. [21] These experiments exposed discrepancies which the theory was unable to explain.
A first indication of a possible way out was given by Hans Bethe in 1947, [22] after attending the Shelter Island Conference. [23] While he was traveling by train from the conference to Schenectady, he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. [22] Despite the limitations of the computation, agreement was excellent. The idea was simply to attach the infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.
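Schematically, the renormalization idea can be summarized as follows; this is a simplified sketch of the general procedure, not Bethe's actual computation.

```latex
% Observed (renormalized) parameters absorb the divergent corrections:
m_{\text{obs}} = m_0 + \delta m, \qquad e_{\text{obs}} = e_0 + \delta e
% Here m_0, e_0 are the "bare" parameters of the Lagrangian and \delta m,
% \delta e are the (divergent) self-energy and vertex corrections.
% Predictions expressed in terms of m_{\text{obs}} and e_{\text{obs}},
% which are fixed by experiment, come out finite order by order in
% perturbation theory.
```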
This "divergence problem" was solved in the case of quantum electrodynamics through the procedure known as renormalization in 1947–49 byHans Kramers, [24] Hans Bethe, [25] Julian Schwinger, [26] [27] [28] [29] Richard Feynman, [30] [31] [32] and Shin'ichiro Tomonaga; [33] [34] [35] [36] [37] [38] [39] the procedure was systematized by Freeman Dyson in 1949. [40] Great progress was made after realizing that all infinities in quantum electrodynamics are related to two effects: the self-energy of the electron/positron, and vacuum polarization.
Renormalization requires paying very careful attention to just what is meant by, for example, the very concepts "charge" and "mass" as they occur in the pure, non-interacting field equations. The "vacuum" is itself polarizable and, hence, populated by virtual particle–antiparticle pairs, and is therefore a seething and busy dynamical system in its own right. Recognizing this was a critical step in identifying the source of the "infinities" and "divergences". The "bare mass" and the "bare charge" of a particle, the values that appear in the free-field equations (the non-interacting case), are abstractions that are simply not realized in experiment (in interaction). What we measure, and hence what our equations and their solutions must account for, are the "renormalized mass" and the "renormalized charge" of a particle: that is, the "shifted" or "dressed" values these quantities must have when due systematic care is taken to include all deviations from their "bare" values, as dictated by the very nature of quantum fields themselves.
The first approach that bore fruit is known as the "interaction representation" (see the article Interaction picture), a Lorentz-covariant and gauge-invariant generalization of time-dependent perturbation theory used in ordinary quantum mechanics, and developed by Tomonaga and Schwinger, generalizing earlier efforts of Dirac, Fock and Boris Podolsky. Tomonaga and Schwinger invented a relativistically covariant scheme for representing field commutators and field operators intermediate between the two main representations of a quantum system, the Schrödinger and the Heisenberg representations. Within this scheme, field commutators at separated points can be evaluated in terms of "bare" field creation and annihilation operators. This allows for keeping track of the time-evolution of both the "bare" and "renormalized", or perturbed, values of the Hamiltonian and expresses everything in terms of the coupled, gauge invariant "bare" field-equations. Schwinger gave the most elegant formulation of this approach. The next development was due to Richard Feynman, with his rules for assigning a graph to the terms in the scattering matrix (see S-matrix and Feynman diagrams). These directly corresponded (through the Schwinger–Dyson equation) to the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states) one needs to be able to calculate. This revolutionized how quantum field theory calculations are carried out in practice.
Two classic textbooks from the 1960s, James D. Bjorken and Sidney David Drell, Relativistic Quantum Mechanics (1964), and J. J. Sakurai, Advanced Quantum Mechanics (1967), thoroughly developed the Feynman graph expansion techniques using physically intuitive and practical methods following from the correspondence principle, without worrying about the technicalities involved in deriving the Feynman rules from the superstructure of quantum field theory itself. Although both Feynman's heuristic and pictorial style of dealing with the infinities and the formal methods of Tomonaga and Schwinger worked extremely well and gave spectacularly accurate answers, the true analytical nature of the question of "renormalizability", that is, whether any theory formulated as a "quantum field theory" would give finite answers, was not worked out until much later, when the urgency of trying to formulate finite theories for the strong and electroweak (and gravitational) interactions demanded its solution.
The success of renormalization in the case of QED was largely fortuitous: the smallness of the coupling constant (the so-called fine-structure constant), the fact that the coupling carries no dimensions involving mass, and the zero mass of the gauge boson involved, the photon, together rendered the small-distance/high-energy behavior of QED manageable. Also, electromagnetic processes are very "clean" in the sense that they are not badly suppressed/damped and/or hidden by the other gauge interactions. By 1965 James D. Bjorken and Sidney David Drell observed: "Quantum electrodynamics (QED) has achieved a status of peaceful coexistence with its divergences ...". [41]
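The dimensionless coupling in question is the fine-structure constant; its small value is what makes the perturbative expansion of QED so effective.

```latex
\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137},
\qquad
\text{QED amplitudes} \;\sim\; \sum_n c_n\,\alpha^n
% Each additional order of perturbation theory (each extra loop or photon
% exchange) is suppressed by roughly two further orders of magnitude.
```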
The unification of the electromagnetic force with the weak force encountered initial difficulties due to the lack of accelerator energies high enough to reveal processes beyond the Fermi interaction range. Additionally, a satisfactory theoretical understanding of hadron substructure had to be developed, culminating in the quark model.
Thanks to the somewhat brute-force, ad hoc and heuristic early methods of Feynman, and the abstract methods of Tomonaga and Schwinger, elegantly synthesized by Freeman Dyson, the modern theory of quantum electrodynamics (QED) established itself during the period of early renormalization. It is still the most accurate physical theory known, the prototype of a successful quantum field theory. Quantum electrodynamics is an example of what is known as an abelian gauge theory. It relies on the symmetry group U(1) and has one massless gauge field, with the U(1) gauge symmetry dictating the form of the interactions involving the electromagnetic field and the photon being the gauge boson.
Beginning in the 1950s, the work of Yang and Mills, following the earlier lead of Weyl, explored the types of symmetries and invariances that any field theory must satisfy. QED, and indeed all field theories, were generalized to a class of quantum field theories known as gauge theories. That symmetries dictate, limit and necessitate the form of interaction between particles is the essence of the "gauge theory revolution". Yang and Mills formulated the first explicit example of a non-abelian gauge theory, Yang–Mills theory, with an attempted explanation of the strong interactions in mind. The strong interactions were then (incorrectly) understood, in the mid-1950s, to be mediated by the pi-mesons, the particles predicted by Hideki Yukawa in 1935, [42] based on his profound reflections concerning the reciprocal connection between the mass of any force-mediating particle and the range of the force it mediates, a connection made possible by the uncertainty principle. In the absence of dynamical information, Murray Gell-Mann pioneered the extraction of physical predictions from sheer non-abelian symmetry considerations, and introduced non-abelian Lie groups to current algebra and so to the gauge theories that came to supersede it.
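Yukawa's mass–range argument can be stated quantitatively; the numbers below use the measured charged-pion mass purely for illustration.

```latex
% A virtual mediator of mass m can exist for a time \Delta t \sim \hbar/(m c^2)
% (uncertainty principle), so the force it transmits has range
R \;\sim\; c\,\Delta t \;\sim\; \frac{\hbar}{m c}
% For the pion, m_\pi c^2 \approx 140\ \text{MeV} and \hbar c \approx 197\ \text{MeV fm},
% giving R \approx 1.4\ \text{fm}, roughly the size of a nucleon.
```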
The 1960s and 1970s saw the formulation of a gauge theory now known as the Standard Model of particle physics, which systematically describes the elementary particles and the interactions between them. The strong interactions are described by quantum chromodynamics (QCD), based on "color" SU(3). The weak interactions require the additional feature of spontaneous symmetry breaking, elucidated by Yoichiro Nambu, and the adjunct Higgs mechanism, considered next.
The electroweak interaction part of the Standard Model was formulated by Sheldon Glashow, Abdus Salam, and John Clive Ward in 1959, [43] [44] with their discovery of the SU(2)×U(1) group structure of the theory. In 1967, Steven Weinberg invoked the Higgs mechanism for the generation of the W and Z masses [45] (the intermediate vector bosons responsible for the weak interactions and the neutral currents) while keeping the mass of the photon zero. The Goldstone and Higgs idea for generating mass in gauge theories was sparked in the late 1950s and early 1960s when a number of theoreticians (including Yoichiro Nambu, Steven Weinberg, Jeffrey Goldstone, François Englert, Robert Brout, G. S. Guralnik, C. R. Hagen, Tom Kibble and Philip Warren Anderson) noticed a possibly useful analogy to the (spontaneous) breaking of the U(1) symmetry of electromagnetism in the formation of the BCS ground state of a superconductor. The gauge boson involved in this situation, the photon, behaves as though it has acquired a finite mass.
There is a further possibility that the physical vacuum (ground-state) does not respect the symmetries implied by the "unbroken" electroweak Lagrangian from which one arrives at the field equations (see the article Electroweak interaction for more details). The electroweak theory of Weinberg and Salam was shown to be renormalizable (finite) and hence consistent by Gerardus 't Hooft and Martinus Veltman. The Glashow–Weinberg–Salam theory (GWS theory), in certain applications, gives an accuracy on a par with quantum electrodynamics.
In the case of the strong interactions, progress concerning their short-distance/high-energy behavior was much slower and more frustrating. For strong interactions with the electroweak fields, there were difficult issues regarding the strength of the coupling, the mass generation of the force carriers, and their non-linear self-interactions. Although there has been theoretical progress toward a grand unified quantum field theory incorporating the electromagnetic force, the weak force and the strong force, empirical verification is still pending. Superunification, incorporating the gravitational force, is still very speculative, and is under intensive investigation by many of the best minds in contemporary theoretical physics. Gravitation is described by a tensor field whose quantum would be a spin-2 gauge boson, the "graviton"; it is further discussed in the articles on general relativity and quantum gravity.
From the point of view of the techniques of (four-dimensional) quantum field theory, and as the numerous efforts to formulate a consistent quantum gravity theory attest, gravitational quantization has been the reigning champion for bad behavior. [46]
There are technical problems rooted in the fact that the Newtonian constant of gravitation has dimensions involving inverse powers of mass, and, as a simple consequence, the theory is plagued by perturbatively badly behaved non-linear self-interactions. Gravity is itself a source of gravity, analogously to gauge theories (whose couplings are, by contrast, dimensionless), leading to uncontrollable divergences at increasing orders of perturbation theory.
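The dimensional argument can be made explicit in natural units; what follows is a standard power-counting estimate, not a full analysis.

```latex
% Newton's constant defines the Planck mass: G_N = 1/M_P^2, with
% M_P \approx 1.2 \times 10^{19}\ \text{GeV} (natural units \hbar = c = 1).
% The effective dimensionless expansion parameter at energy E is therefore
G_N E^2 \;=\; \left(\frac{E}{M_P}\right)^2
% which grows with energy, and each loop order requires new counterterm
% structures: perturbative quantum gravity is non-renormalizable.
```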
Moreover, gravity couples to all energy equally strongly, as per the equivalence principle, which makes the notion of ever really "switching off", "cutting off" or separating the gravitational interaction from the other interactions ambiguous, since, with gravitation, we are dealing with the very structure of space-time itself.
Moreover, it has not been established that a theory of quantum gravity is necessary (see Quantum field theory in curved spacetime).
Parallel breakthroughs in the understanding of phase transitions in condensed matter physics led to novel insights based on the renormalization group. They involved the work of Leo Kadanoff (1966) [47] and Kenneth Geddes Wilson–Michael Fisher (1972) [48] —extending the work of Ernst Stueckelberg–André Petermann (1953) [49] and Murray Gell-Mann–Francis Low (1954) [50] —which led to the seminal reformulation of quantum field theory by Kenneth Geddes Wilson in 1975. [51] This reformulation provided insights into the evolution of effective field theories with scale, which classified all field theories, renormalizable or not. The remarkable conclusion is that, in general, most observables are "irrelevant", i.e., the macroscopic physics is dominated by only a few observables in most systems.
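The Wilsonian classification of couplings can be summarized by simple power counting; the following is a rough sketch in four spacetime dimensions, with Λ denoting the cutoff scale at which the theory is defined.

```latex
% An operator of mass dimension \Delta carries a coupling g of mass dimension
% 4 - \Delta. Its dimensionless contribution to observables at energy E is
g\,E^{\,\Delta-4} \;\sim\; \left(\frac{E}{\Lambda}\right)^{\Delta-4}
% (taking g of natural size \Lambda^{4-\Delta} at the cutoff). Operators with
% \Delta > 4 ("irrelevant") are suppressed for E \ll \Lambda, which is why
% macroscopic physics is governed by only a few couplings.
```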
During the same period, Leo Kadanoff (1969) [52] introduced an operator algebra formalism for the two-dimensional Ising model, a widely studied mathematical model of ferromagnetism in statistical physics. This development suggested that a quantum field theory describes the scaling limit of the model. It later led to the idea that a finite number of generating operators could represent all the correlation functions of the Ising model. The existence of a much stronger symmetry for the scaling limit of two-dimensional critical systems was suggested by Alexander Belavin, Alexander Markovich Polyakov and Alexander Zamolodchikov in 1984, which eventually led to the development of conformal field theory, [53] [54] a special case of quantum field theory, which is presently utilized in different areas of particle physics and condensed matter physics.
The renormalization group spans a set of ideas and methods to monitor changes of the behavior of the theory with scale, providing a deep physical understanding which sparked what has been called the "grand synthesis" of theoretical physics, uniting the quantum field theoretical techniques used in particle physics and condensed matter physics into a single powerful theoretical framework.
The gauge field theory of the strong interactions, quantum chromodynamics, relies crucially on this renormalization group for its distinguishing characteristic features, asymptotic freedom and color confinement.
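Asymptotic freedom follows from the sign of the one-loop QCD beta function; the standard result for SU(3) with n_f quark flavors is shown below.

```latex
\mu \frac{d g_s}{d\mu} \;=\; \beta(g_s)
  \;=\; -\,\frac{g_s^3}{16\pi^2}\left(11 - \frac{2}{3}\,n_f\right) + \dots
% The coefficient is negative for n_f \le 16, so the strong coupling g_s
% decreases at short distances (asymptotic freedom) and grows at long
% distances, consistent with color confinement.
```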
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory.
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction.
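For reference, the QED Lagrangian density in standard modern notation (one common sign convention for the covariant derivative among several in use):

```latex
\mathcal{L}_{\rm QED}
  \;=\; \bar\psi\left(i\gamma^\mu D_\mu - m\right)\psi
        \;-\; \frac{1}{4}\,F_{\mu\nu}F^{\mu\nu},
\qquad
D_\mu = \partial_\mu + i e A_\mu,
\quad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% \psi is the electron field and A_\mu the photon field; the single coupling e
% (equivalently the fine-structure constant \alpha) controls all interactions.
```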
A timeline of atomic and subatomic physics.
Julian Seymour Schwinger was a Nobel Prize-winning American theoretical physicist. He is best known for his work on quantum electrodynamics (QED), in particular for developing a relativistically invariant perturbation theory, and for renormalizing QED to one loop order. Schwinger was a physics professor at several universities.
Shinichiro Tomonaga, usually cited as Sin-Itiro Tomonaga in English, was a Japanese physicist, influential in the development of quantum electrodynamics, work for which he was jointly awarded the Nobel Prize in Physics in 1965 along with Richard Feynman and Julian Schwinger.
Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.
In theoretical physics, a chiral anomaly is the anomalous nonconservation of a chiral current. In everyday terms, it is equivalent to a sealed box that contained equal numbers of left and right-handed bolts, but when opened was found to have more left than right, or vice versa.
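Quantitatively, for a single Dirac fermion of charge e coupled to the electromagnetic field, the anomalous divergence of the axial current takes the standard Adler–Bell–Jackiw form shown below, up to sign and normalization conventions.

```latex
\partial_\mu j^{\mu}_5 \;=\; \frac{e^2}{16\pi^2}\,
   \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}
% (the overall sign depends on conventions for \gamma_5 and \epsilon^{\mu\nu\rho\sigma}).
% Classically the right-hand side would vanish for massless fermions; the
% quantum correction is what the box-of-bolts analogy above describes.
```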
In quantum field theory, the quantum vacuum state is the quantum state with the lowest possible energy. Generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.
In theoretical physics, Pauli–Villars regularization (P–V) is a procedure that isolates divergent terms from finite parts in loop calculations in field theory in order to renormalize the theory. Wolfgang Pauli and Felix Villars published the method in 1949, based on earlier work by Richard Feynman, Ernst Stueckelberg and Dominique Rivier.
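The basic mechanism of Pauli–Villars regularization can be sketched at the level of a single propagator; the regulator mass M below is a calculational device, not a physical particle.

```latex
\frac{1}{k^2 - m^2 + i\epsilon}
\;\longrightarrow\;
\frac{1}{k^2 - m^2 + i\epsilon} \;-\; \frac{1}{k^2 - M^2 + i\epsilon},
\qquad M \gg m
% The subtraction improves the large-k falloff from 1/k^2 to 1/k^4, rendering
% otherwise divergent loop integrals finite; M is taken to infinity at the end,
% after renormalization.
```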
In quantum field theory, and specifically quantum electrodynamics, vacuum polarization describes a process in which a background electromagnetic field produces virtual electron–positron pairs that change the distribution of charges and currents that generated the original electromagnetic field. It is also sometimes referred to as the self-energy of the gauge boson (photon).
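One measurable consequence of vacuum polarization is the running of the effective electromagnetic coupling; the standard one-loop, leading-logarithm result for momentum transfers well above the electron mass is:

```latex
\alpha_{\rm eff}(Q^2) \;\approx\;
\frac{\alpha}{\,1 - \dfrac{\alpha}{3\pi}\,\ln\!\dfrac{Q^2}{m_e^2}\,},
\qquad Q^2 \gg m_e^2
% The screening by virtual e^+e^- pairs is less complete at short distances,
% so the effective charge grows slowly with energy.
```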
In the physics of gauge theories, gauge fixing denotes a mathematical procedure for coping with redundant degrees of freedom in field variables. By definition, a gauge theory represents each physically distinct configuration of the system as an equivalence class of detailed local field configurations. Any two detailed configurations in the same equivalence class are related by a certain transformation, equivalent to a shear along unphysical axes in configuration space. Most of the quantitative physical predictions of a gauge theory can only be obtained under a coherent prescription for suppressing or ignoring these unphysical degrees of freedom.
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales. It compensates for the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use.
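As a concrete illustration (a generic log-divergent one-loop integral, not tied to any particular theory), introducing a momentum cutoff Λ as the regulator gives:

```latex
\int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{\left(k^2 - m^2 + i\epsilon\right)^2}
\;=\; \frac{i}{16\pi^2}\,\ln\!\frac{\Lambda^2}{m^2} \;+\; \text{finite terms}
% after Wick rotation and cutting the Euclidean momentum off at |k_E| = \Lambda.
% The \Lambda-dependence is later removed by renormalization, leaving
% predictions that do not depend on the arbitrary regulator.
```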
Quantum mechanics is the study of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics.
The Wheeler–Feynman absorber theory, named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistically correct extension of action at a distance between charged particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by a Lorentz-invariant action defined directly in terms of the particle trajectories.
In the physics of electromagnetism, the Abraham–Lorentz force is the reaction force on an accelerating charged particle caused by the particle emitting electromagnetic radiation by self-interaction. It is also called the radiation reaction force, the radiation damping force, or the self-force. It is named after the physicists Max Abraham and Hendrik Lorentz.
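In SI units the Abraham–Lorentz force on a point charge q is proportional to the time derivative of its acceleration:

```latex
\mathbf{F}_{\rm rad}
  \;=\; \frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf{a}}
  \;=\; \frac{q^2}{6\pi \varepsilon_0 c^3}\,\dot{\mathbf{a}}
% where \dot{\mathbf{a}} is the jerk (rate of change of the acceleration).
% Its problematic features (pre-acceleration, runaway solutions) are part of
% the classical self-energy difficulties discussed in the history above.
```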
The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where x⁺ = ct + z plays the role of time and the corresponding spatial coordinate is x⁻ = ct − z. Here, t is the ordinary time, z is one Cartesian coordinate, and c is the speed of light. The other two Cartesian coordinates, x and y, are untouched and are often called transverse or perpendicular, denoted by symbols of the type x⊥ = (x, y). The choice of the frame of reference in which the time t and the z-axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others.
In physics, relativistic quantum mechanics (RQM) is any Poincaré covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has application in high energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture were originally formulated in a non-relativistic background, a few of them also work with special relativity.
Norman Myles Kroll was an American theoretical physicist, known for his pioneering work in QED.