In theoretical physics, the term renormalization group (RG) refers to a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In particle physics, it reflects the changes in the underlying force laws (codified in a quantum field theory) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the uncertainty principle.
A change in scale is called a scale transformation. The renormalization group is intimately related to scale invariance and conformal invariance, symmetries in which a system appears the same at all scales (self-similarity). [lower-alpha 1]
As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system. In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.
For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running, in the value of the electric charge is determined by the renormalization group equation.
The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school, Euclid, and up to Galileo. [1] They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds, as a way to explain turbulence.
The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics, fluid mechanics, physical cosmology, and even nanotechnology. An early article [2] by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function h(e) in quantum electrodynamics (QED), which is now called the beta function (see below).
Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, [3] which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter g(μ) at the energy scale μ is effectively given by the (one-dimensional translation) group equation
g(μ) = G⁻¹( (μ/M)^d G(g(M)) ),
or equivalently, G(g(μ)) = (μ/M)^d G(g(M)), for some function G (unspecified—nowadays called Wegner's scaling function) and a constant d, in terms of the coupling g(M) at a reference scale M.
Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as μ, and can vary to define the theory at any other scale:
g(κ) = G⁻¹( (κ/μ)^d G(g(μ)) ).
The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings [4] in the mathematical sense (Schröder's equation).
On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) = G·d/(∂G/∂g) of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg and Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation:
∂g/∂(ln μ) = ψ(g) = β(g).
The modern name is also indicated, the beta function, introduced by C. Callan and K. Symanzik in 1970. [5] Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine structure "constant" of QED was measured [6] to be about 1⁄127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1⁄137 . [lower-alpha 2]
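The order of magnitude of this running can be checked with a short back-of-the-envelope computation. The following is a minimal sketch, assuming only the one-loop (leading-logarithm) QED beta function and crude, illustrative fermion mass thresholds; the value measured near 200 GeV also contains higher-order and nonperturbative hadronic effects that this toy estimate ignores.

```python
import math

# One-loop QED running of the fine-structure constant:
#   d(alpha)/d(ln mu) = (2 / (3*pi)) * alpha^2 * sum_f N_c * Q_f^2
# In the leading-log approximation, each fermion lighter than mu contributes
#   1/alpha(mu) = 1/alpha(0) - (2 / (3*pi)) * N_c * Q_f^2 * ln(mu / m_f).

alpha_0 = 1 / 137.036      # low-energy fine-structure constant
mu = 200.0                 # probe scale in GeV (roughly the LEP scale quoted above)

# (name, assumed threshold mass in GeV, color factor N_c, electric charge Q) -- illustrative numbers
fermions = [
    ("e", 0.000511, 1, -1), ("mu", 0.1057, 1, -1), ("tau", 1.777, 1, -1),
    ("u", 0.3, 3, 2/3), ("d", 0.3, 3, -1/3), ("s", 0.5, 3, -1/3),
    ("c", 1.3, 3, 2/3), ("b", 4.2, 3, -1/3),
]

inv_alpha = 1 / alpha_0
for name, m, nc, q in fermions:
    if m < mu:                         # only fermions lighter than mu contribute to the running
        inv_alpha -= (2 / (3 * math.pi)) * nc * q**2 * math.log(mu / m)

print(f"1/alpha({mu:.0f} GeV) ~ {inv_alpha:.1f}")   # roughly 127, versus ~137 at low energies
```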
The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. [lower-alpha 3] This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ. [lower-alpha 4]
The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. [8] The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, [9] as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. [10] [11] [12] He was awarded the Nobel prize for these decisive contributions in 1982. [13]
Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. [5] [14] The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. [lower-alpha 5] Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.
In 1973, [15] [16] it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that the coupling grows as the energy is lowered from an initial high-energy value, eventually blowing up (diverging) at a special value of μ. This special value is the scale of the strong interactions, μ = ΛQCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
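The sign of the beta function can be made concrete with the standard leading-order formula for the running strong coupling. In the sketch below, the values of ΛQCD and of the number of active flavors nf are purely illustrative, and a single fixed nf is kept for simplicity.

```python
import math

# Leading-order (one-loop) running of the strong coupling:
#   alpha_s(mu) = 12*pi / ((33 - 2*n_f) * ln(mu^2 / Lambda_QCD^2))
# The coupling shrinks logarithmically at high mu (asymptotic freedom)
# and blows up as mu approaches Lambda_QCD from above.

def alpha_s(mu_gev, n_f=5, lambda_qcd=0.2):       # illustrative parameter choices
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(mu_gev**2 / lambda_qcd**2))

for mu in (0.25, 0.5, 1.0, 10.0, 100.0, 1000.0):  # GeV
    print(f"alpha_s({mu:7.2f} GeV) = {alpha_s(mu):.3f}")
```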
Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems. [lower-alpha 6]
Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β(g) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving Higgs boson runs toward a fixed non-zero (non-trivial) infrared fixed point, first predicted by Pendleton and Ross (1981), [17] and C. T. Hill. [18] The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model, suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.
In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.
It is also the modern key idea underlying critical phenomena in condensed matter physics. [19] Indeed, the RG has become one of the most important tools of modern physics. [20] It is often used in combination with the Monte Carlo method. [21]
This section introduces, in a pedagogical way, a picture of the RG that may be easiest to grasp: the block-spin RG, devised by Leo P. Kadanoff in 1966. [8]
Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.
Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is quantified by a certain coupling J. The physics of the system will be described by a certain formula, say the Hamiltonian H(T, J).
Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J : H(T′, J′). (This isn't exactly true, in general, but it is often a good first approximation.)
Perhaps, the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.
Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long range behaviour of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Often, when iterated many times, this RG transformation leads to a certain number of fixed points.
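The blocking step itself is easy to state in code. The sketch below performs one 2×2 block-spin step on an Ising-like configuration of ±1 spins using a majority rule with random tie-breaking; this is only one common choice of blocking rule, and the genuinely hard part of the method, extracting the new couplings T′ and J′ of the blocked system, is not attempted here.

```python
import numpy as np

def block_spin_step(spins, rng):
    """One Kadanoff block-spin step: replace each 2x2 block of +-1 spins by a
    single block spin using a majority rule (ties broken at random)."""
    L = spins.shape[0]
    assert L % 2 == 0
    blocks = spins.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))   # net magnetization of each block
    ties = blocks == 0
    blocks[ties] = rng.choice([-2, 2], size=ties.sum())             # random tie-break
    return np.sign(blocks).astype(int)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))     # a random, high-temperature-like configuration
coarse = block_spin_step(spins, rng)
print(spins.shape, "->", coarse.shape)         # (16, 16) -> (8, 8): one quarter as many variables
```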
To be more concrete, consider a magnetic system (e.g., the Ising model), in which the J coupling denotes the trend of neighbour spins to be aligned. The configuration of the system is the result of the tradeoff between the ordering J term and the disordering effect of temperature.
For many models of this kind there are three fixed points: a zero-temperature (infinite-coupling) fixed point at which all the spins are aligned, an infinite-temperature (zero-coupling) fixed point at which the spins are completely uncorrelated, and a nontrivial fixed point in between, at the critical temperature T = Tc, which governs the behaviour exactly at the phase transition.
So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair until we find the corresponding fixed point.
In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables {s_i} → {s′_i}; the number of s′_i must be lower than the number of s_i. Now let us try to rewrite the function Z only in terms of the s′_i. If this is achievable by a certain change in the parameters, {J_k} → {J′_k}, then the theory is said to be renormalizable.
Most fundamental theories of physics, such as quantum electrodynamics, quantum chromodynamics and the electroweak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.
The change in the parameters is implemented by a certain beta function, {J′_k} = β({J_k}), which is said to induce a renormalization group flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.
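A running coupling and its fixed points can be seen explicitly in the simplest exactly solvable case, the one-dimensional Ising chain in zero field: tracing out every other spin (decimation) maps the dimensionless coupling K = J/kT to a new coupling K′ with tanh K′ = tanh²K. The short sketch below iterates this exact recursion; the flow heads toward the trivial fixed point K = 0, with K = ∞ (zero temperature) the only other, unstable, fixed point.

```python
import math

def decimate(K):
    """Exact RG recursion for the zero-field 1D Ising chain:
    summing over every other spin maps K = J/kT to K' with tanh(K') = tanh(K)^2."""
    return math.atanh(math.tanh(K) ** 2)

K = 2.0                         # start at strong coupling (low temperature)
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)
# The running coupling flows toward K = 0: the 1D chain is disordered
# at any nonzero temperature, so there is no nontrivial fixed point in between.
```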
As was stated in the previous section, the most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality, possessing what is called a Landau pole, as in quantum electrodynamics. For a φ⁴ interaction, Michael Aizenman proved that this theory is indeed trivial, for space-time dimension D ≥ 5. [22] For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question. [23]
Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see, as an example in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup, as lossiness implies that there is no unique inverse for each element.
Consider a certain observable A of a physical system undergoing an RG transformation. How the magnitude of the observable changes as the length scale of the system goes from small to large determines the importance of the observable for the scaling law:
| If its magnitude ... | then the observable is ... |
|---|---|
| always increases | relevant |
| always decreases | irrelevant |
| other | marginal |
A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkable broad fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems.
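A standard way of making "relevant/irrelevant/marginal" quantitative, not spelled out in the text above but implied by it, is power counting about the free (Gaussian) fixed point: under a coarse-graining by a factor b > 1, each coupling of a scalar theory in d dimensions is rescaled by b raised to its mass dimension.

```latex
% Power counting about the Gaussian fixed point of a scalar theory in d dimensions.
\[
  S=\int d^{d}x\,\Big[\tfrac12(\partial\phi)^2+\tfrac12 m^{2}\phi^{2}+\tfrac{\lambda}{4!}\,\phi^{4}\Big],
  \qquad x\to x/b,\qquad \phi\to b^{(d-2)/2}\phi,
\]
\[
  m^{2}\to b^{2}m^{2}\ (\text{always relevant}),\qquad
  \lambda\to b^{\,4-d}\lambda\ (\text{relevant for }d<4,\ \text{marginal at }d=4,\ \text{irrelevant for }d>4).
\]
```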
As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (the Avogadro number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.
Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition) in very disparate phenomena, such as magnetic systems, superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables, such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.
This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes, specified by the shared sets of relevant observables. [lower-alpha 7]
Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG.
Momentum-space RG, on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short length scales, momentum-space RG has essentially the same coarse-graining effect as real-space RG.
Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion. This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.
As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence the dependence of a certain coupling constant (here, the electric charge) on the distance scale.
Momentum and length scales are related inversely, according to the de Broglie relation: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to integrate out high momenta or high energy from their theories.
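As a quick order-of-magnitude illustration (the numbers are inserted here only as an example), the conversion constant ħc ≈ 197 MeV·fm relates the LEP-type energy scale quoted earlier to the distance it resolves:

```latex
\[
  \ell \;\sim\; \frac{\hbar c}{E} \;\approx\; \frac{197\ \text{MeV fm}}{200\ \text{GeV}}
  \;\approx\; 10^{-3}\ \text{fm} \;=\; 10^{-18}\ \text{m},
\]
```

which is far below atomic (10⁻¹⁰ m) and even nuclear (10⁻¹⁵ m) sizes.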
An exact renormalization group equation (ERGE) is one that takes irrelevant couplings into account. There are several formulations.
The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space. Insist upon a hard momentum cutoff, p² ≤ Λ², so that the only degrees of freedom are those with momenta less than Λ. The partition function is
Z = ∫ Dφ exp(−SΛ[φ]),
with the functional integral running over the modes with p² ≤ Λ².
For any positive Λ′ less than Λ, define SΛ′ (a functional over field configurations φ whose Fourier transform has momentum support within p² ≤ Λ′²) as
exp(−SΛ′[φ]) = ∫ Dφ exp(−SΛ[φ]),
where the integration is restricted to the Fourier modes of φ with momenta in the shell Λ′ ≤ p ≤ Λ.
If SΛ depends only on φ and not on derivatives of φ, this may be rewritten in a form in which it becomes clear that, since only modes of φ with support between Λ′ and Λ are integrated over, the left-hand side may still depend on φ with support outside that range. Obviously,
Z = ∫ Dφ exp(−SΛ′[φ]),
where now the integral runs only over the modes with p² ≤ Λ′².
In fact, this transformation is transitive: if you compute SΛ′ from SΛ and then compute SΛ″ from SΛ′, this gives you the same Wilsonian action as computing SΛ″ directly from SΛ.
The Polchinski ERGE involves a smooth UV regulator cutoff. Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than Λ heavily. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.
In other words (for a real scalar field; generalizations to other fields are obvious),
ZΛ[J] = ∫ Dφ exp(−Sint Λ[φ] − ½ φ⋅RΛ⋅φ + J⋅φ),
and ZΛ is really independent of Λ! We have used the condensed deWitt notation here. We have also split the bare action SΛ into a quadratic kinetic part and an interacting part Sint Λ. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. RΛ is a function of the momentum p, and the second term in the exponent is
½ ∫ d^dp/(2π)^d φ(−p) RΛ(p) φ(p)
when expanded.
When p² ≪ Λ², RΛ(p)/p² is essentially 1. When p² ≫ Λ², RΛ(p)/p² becomes very large and approaches infinity. RΛ(p)/p² is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff Λ unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over Wilson's sharp cutoff.
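One way to see the effect of such a smooth suppression numerically is to pick an explicit profile. The choice RΛ(p)/p² = exp(p²/Λ²) used below is purely an assumed example with the limiting behaviour just described, not a form singled out by the text.

```python
import math

def R_over_p2(p, cutoff):
    """Assumed example of a smooth-cutoff profile: ~1 for p << cutoff,
    grows without bound for p >> cutoff, always >= 1, and smooth."""
    return math.exp(p**2 / cutoff**2)

cutoff = 1.0
for p in (0.1, 0.5, 1.0, 2.0, 3.0):
    ratio = R_over_p2(p, cutoff)
    # statistical weight of a unit-amplitude fluctuation at momentum p
    # coming from the quadratic term exp(-1/2 * R(p) * |phi(p)|^2)
    weight = math.exp(-0.5 * ratio * p**2)
    print(f"p = {p:3.1f}   R/p^2 = {ratio:9.3g}   mode weight = {weight:.3g}")
```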
The condition that ZΛ be independent of Λ,
dZΛ/dΛ = 0,
can be satisfied by (but not only by) Polchinski's flow equation for the interacting action Sint Λ.
Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively. [24]
The effective average action ERGE involves a smooth IR regulator cutoff. The idea is to take all fluctuations right up to an IR scale k into account. The effective average action will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.
For a real scalar field, one adds an IR cutoff term
½ ∫ d^dp/(2π)^d φ(−p) Rk(p) φ(p)
to the action S, where Rk is a function of both k and p such that for p² ≫ k², Rk(p) is very tiny and approaches 0, and for p² ≪ k², Rk(p) ≳ k². Rk is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations.
One can use the condensed deWitt notation
½ φ⋅Rk⋅φ
for this IR regulator.
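A commonly used explicit choice satisfying these requirements is the so-called optimized (Litim) regulator; the text itself does not single out any particular Rk, so the formula below is just one standard example.

```latex
\[
  R_{k}(p) \;=\; \big(k^{2}-p^{2}\big)\,\theta\!\big(k^{2}-p^{2}\big),
  \qquad\text{so that}\qquad
  R_{k}(p)=0 \ \text{ for } p^{2}\ge k^{2},
  \qquad
  R_{k}(p)\simeq k^{2} \ \text{ for } p^{2}\ll k^{2}.
\]
```

With this choice, low-momentum modes acquire an effective mass of order k and are frozen out, while modes above k are left untouched.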
So,
exp(Wk[J]) = ∫ Dφ exp(−S[φ] − ½ φ⋅Rk⋅φ + J⋅φ),
where J is the source field. The Legendre transform of Wk ordinarily gives the effective action. However, the action that we started off with is really S[φ] + ½ φ⋅Rk⋅φ and so, to get the effective average action, we subtract off ½ φ⋅Rk⋅φ. In other words,
φ[J;k] = δWk/δJ
can be inverted to give Jk[φ], and we define the effective average action Γk as
Γk[φ] = −Wk[Jk[φ]] + Jk[φ]⋅φ − ½ φ⋅Rk⋅φ.
Hence, differentiating with respect to k at fixed φ and using the relations above,
∂k Γk[φ] = ½ Tr[ (Γk^(2)[φ] + Rk)⁻¹ ∂k Rk ],
where Γk^(2) denotes the second functional derivative of Γk; this is the ERGE, which is also known as the Wetterich equation. As shown by Morris the effective action Γk is in fact simply related to Polchinski's effective action Sint via a Legendre transform relation. [25]
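The equation has a one-loop structure. A standard consistency check, stated here as an aside rather than something taken from the text, is that the regulator-dressed one-loop effective action already satisfies a flow of exactly this form, with the full Γk on the right-hand side replaced by the bare action:

```latex
\[
  \Gamma_{k}^{\text{1-loop}}[\phi] \;=\; S[\phi] \;+\; \tfrac12\,\operatorname{Tr}\,
  \ln\!\big(S^{(2)}[\phi] + R_{k}\big)
  \quad\Longrightarrow\quad
  \partial_{k}\,\Gamma_{k}^{\text{1-loop}} \;=\; \tfrac12\,\operatorname{Tr}\!
  \Big[\big(S^{(2)}[\phi]+R_{k}\big)^{-1}\,\partial_{k}R_{k}\Big].
\]
```

The exact equation thus resums the higher loops through the k-dependence of Γk^(2).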
As there are infinitely many choices of Rk, there are also infinitely many different interpolating ERGEs. Generalization to other fields like spinorial fields is straightforward.
Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.
The renormalization group can also be used to compute effective potentials at orders higher than 1-loop. This kind of approach is particularly interesting for computing corrections to the Coleman–Weinberg [26] mechanism. To do so, one must write the renormalization group equation in terms of the effective potential; for the φ⁴ model it expresses the requirement that the effective potential Veff(φ; μ, λ) not depend on the arbitrary renormalization scale μ, once the running of the coupling λ and the rescaling of the field φ are taken into account.
In order to determine the effective potential, it is useful to write Veff as
Veff = φ⁴ Seff(λ, L),
where Seff is a power series in L = ln(φ²/μ²):
Seff = A + B L + C L² + ⋯,
with coefficients that are themselves series in the coupling λ.
Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in the reference. [27]
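A minimal numerical sketch of the leading-logarithm content of this procedure, assuming only the standard one-loop beta function βλ = 3λ²/(16π²) of the φ⁴ model and the (assumed, illustrative) choice of evaluating the running coupling at the scale μ = φ:

```python
import math

# One-loop beta function of the phi^4 coupling:  d(lambda)/d(ln mu) = 3*lambda^2 / (16*pi^2).
# Its solution resums the leading logarithms:
#   lambda(mu) = lambda_0 / (1 - (3*lambda_0 / (16*pi^2)) * ln(mu / mu_0))

def lam_running(mu, mu0=100.0, lam0=0.1):
    return lam0 / (1.0 - 3.0 * lam0 / (16 * math.pi**2) * math.log(mu / mu0))

def v_eff_improved(phi, mu0=100.0, lam0=0.1):
    """Leading-log RG-improved potential: the tree-level quartic term evaluated
    with the coupling run to the scale mu = phi (an illustrative choice)."""
    return lam_running(phi, mu0, lam0) * phi**4 / 24.0    # (lambda/4!) * phi^4

for phi in (1.0, 10.0, 100.0, 1e3, 1e4):
    print(f"phi = {phi:8.1f}   lambda(phi) = {lam_running(phi):.5f}   V_eff ~ {v_eff_improved(phi):.3e}")
```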
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory.
A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified.
In quantum field theory, Wilson loops are gauge invariant operators arising from the parallel transport of gauge variables around closed loops. They encode all gauge information of the theory, allowing for the construction of loop representations which fully describe gauge theories in terms of these loops. In pure gauge theory they play the role of order operators for confinement, where they satisfy what is known as the area law. Originally formulated by Kenneth G. Wilson in 1974, they were used to construct links and plaquettes which are the fundamental parameters in lattice gauge theory. Wilson loops fall into the broader class of loop operators, with some other notable examples being 't Hooft loops, which are magnetic duals to Wilson loops, and Polyakov loops, which are the thermal version of Wilson loops.
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality.
In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams, which serve to calculate the rate of collisions in quantum field theory, virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called (causal) Green's functions.
The Schwinger–Dyson equations (SDEs) or Dyson–Schwinger equations, named after Julian Schwinger and Freeman Dyson, are general relations between correlation functions in quantum field theories (QFTs). They are also referred to as the Euler–Lagrange equations of quantum field theories, since they are the equations of motion corresponding to the Green's function. They form a set of infinitely many functional differential equations, all coupled to each other, sometimes referred to as the infinite tower of SDEs.
In quantum field theory, the mass gap is the difference in energy between the lowest energy state, the vacuum, and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
In quantum field theory, a quartic interaction is a type of self-interaction in a scalar field. Other types of quartic interactions may be found under the topic of four-fermion interactions. A classical free scalar field satisfies the Klein–Gordon equation. If a scalar field is denoted φ, a quartic interaction is represented by adding a potential energy term (λ/4!) φ⁴ to the Lagrangian density. The coupling constant λ is dimensionless in 4-dimensional spacetime.
In quantum field theory and statistical mechanics, loop integrals are the integrals which appear when evaluating the Feynman diagrams with one or more loops by integrating over the internal momenta. These integrals are used to determine counterterms, which in turn allow evaluation of the beta function, which encodes the dependence of a coupling constant on the energy scale.
In theoretical physics, a source field J is a background field coupled to the original field φ through a term of the form J⋅φ in the action.
In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation.
In theoretical physics, the BRST formalism, or BRST quantization denotes a relatively rigorous mathematical approach to quantizing a field theory with a gauge symmetry. Quantization rules in earlier quantum field theory (QFT) frameworks resembled "prescriptions" or "heuristics" more than proofs, especially in non-abelian QFT, where the use of "ghost fields" with superficially bizarre properties is almost unavoidable for technical reasons related to renormalization and anomaly cancellation.
A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom in a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example to calculate the structures and properties of polymer solutions, polymer melts and thermoplastics.
f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, f, of the Ricci scalar, R. The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl. It has become an active field of research following work by Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems.
In theoretical physics, the functional renormalization group (FRG) is an implementation of the renormalization group (RG) concept which is used in quantum and statistical field theory, especially when dealing with strongly interacting systems. The method combines functional methods of quantum field theory with the intuitive renormalization group idea of Kenneth G. Wilson. This technique allows one to interpolate smoothly between the known microscopic laws and the complicated macroscopic phenomena in physical systems. In this sense, it bridges the transition from the simplicity of microphysics to the complexity of macrophysics. Figuratively speaking, FRG acts as a microscope with a variable resolution. One starts with a high-resolution picture of the known microphysical laws and subsequently decreases the resolution to obtain a coarse-grained picture of macroscopic collective phenomena. The method is nonperturbative, meaning that it does not rely on an expansion in a small coupling constant. Mathematically, FRG is based on an exact functional differential equation for a scale-dependent effective action.
Asymptotic safety is a concept in quantum field theory which aims at finding a consistent and predictive quantum theory of the gravitational field. Its key ingredient is a nontrivial fixed point of the theory's renormalization group flow which controls the behavior of the coupling constants in the ultraviolet (UV) regime and renders physical quantities safe from divergences. Although originally proposed by Steven Weinberg to find a theory of quantum gravity, the idea of a nontrivial fixed point providing a possible UV completion can be applied also to other field theories, in particular to perturbatively nonrenormalizable ones. In this respect, it is similar to quantum triviality.
The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where x⁺ = ct + z plays the role of time and the corresponding spatial coordinate is x⁻ = ct − z. Here, t is the ordinary time, z is one Cartesian coordinate, and c is the speed of light. The other two Cartesian coordinates, x and y, are untouched and often called transverse or perpendicular, denoted by symbols of the type x⊥ = (x, y). The choice of the frame of reference where the time and z-axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others.
Higher-spin theory or higher-spin gravity is a common name for field theories that contain massless fields of spin greater than two. Usually, the spectrum of such theories contains the graviton as a massless spin-two field, which explains the second name. Massless fields are gauge fields and the theories should be (almost) completely fixed by these higher-spin symmetries. Higher-spin theories are supposed to be consistent quantum theories and, for this reason, to give examples of quantum gravity. Most of the interest in the topic is due to the AdS/CFT correspondence where there is a number of conjectures relating higher-spin theories to weakly coupled conformal field theories. It is important to note that only certain parts of these theories are known at present and not many examples have been worked out in detail except some specific toy models.
Hamiltonian truncation is a numerical method used to study quantum field theories (QFTs) in low numbers of spacetime dimensions. Hamiltonian truncation is an adaptation of the Rayleigh–Ritz method from quantum mechanics. It is closely related to the exact diagonalization method used to treat spin systems in condensed matter physics. The method is typically used to compute the spectrum of the Hamiltonian of the theory under study. A key feature of Hamiltonian truncation is that an explicit ultraviolet cutoff is introduced, akin to the lattice spacing a in lattice Monte Carlo methods. Since Hamiltonian truncation is a nonperturbative method, it can be used to study strong-coupling phenomena like spontaneous symmetry breaking.
In theoretical physics, more specifically in quantum field theory and supersymmetry, supersymmetric Yang–Mills, also known as super Yang–Mills and abbreviated to SYM, is a supersymmetric generalization of Yang–Mills theory, which is a gauge theory that plays an important part in the mathematical formulation of forces in particle physics. It is a special case of 4D N = 1 global supersymmetry.