The automatic calculation of particle interaction or decay is part of the computational particle physics branch. It refers to computing tools that help calculate the complex particle interactions studied in high-energy physics, astroparticle physics and cosmology. The goal of the automation is to handle the full sequence of calculations in an automatic (programmed) way: from the Lagrangian expression describing the physics model up to the cross-section values and the event generator software.
Particle accelerators or colliders produce collisions (interactions) of particles (like the electron or the proton). The colliding particles form the initial state. In the collision, particles can be annihilated and/or exchanged, possibly producing different sets of particles, the final states. The initial and final states of the interaction are related through the so-called scattering matrix (S-matrix).
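In standard notation (a textbook statement, not tied to any particular package), the S-matrix element between an initial state |i⟩ and a final state |f⟩, and the corresponding transition probability, read:

```latex
% Transition amplitude and probability between initial and final states
S_{fi} = \langle f \,|\, S \,|\, i \rangle , \qquad
P(i \to f) = \left| S_{fi} \right|^{2}
```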
For example, at LEP, e+ + e− → e+ + e− or e+ + e− → μ+ + μ− are processes where the initial state is an electron and a positron colliding to produce an electron and a positron, or two muons of opposite charge: the final states. In these simple cases, no automatic packages are needed and analytical cross-section expressions can easily be derived, at least for the lowest approximation: the Born approximation, also called the leading order or the tree level (as the Feynman diagrams have only trunks and branches, no loops).
But particle physics now requires much more complex calculations, as at the LHC for processes like p + p → n jets, where p denotes a proton and n the number of jets of particles initiated by the proton constituents (quarks and gluons). The number of subprocesses describing a given process is so large that automatic tools have been developed to mitigate the burden of hand calculations.
Interactions at higher energies open a large spectrum of possible final states and consequently increase the number of processes to compute.
High precision experiments impose the calculation of higher order corrections, namely the inclusion of subprocesses where more than one virtual particle can be created and annihilated during the interaction, creating so-called loops, which induce much more involved calculations.
Finally, new theoretical models like supersymmetry (the MSSM in its minimal version) predict a flurry of new processes.
The automatic packages, once seen as mere teaching support, have become over the last ten years an essential component of the data simulation and analysis suite for all experiments. They help construct event generators and are sometimes viewed as generators of event generators, or meta-generators.
A particle physics model is essentially described by its Lagrangian. To simulate the production of events through event generators, three steps have to be taken. The Automatic Calculation project is to create the tools to make those steps as automatic (or programmed) as possible:
I Feynman rules, coupling and mass generation
II Matrix element code generation: Various methods are used to automatically produce the matrix element expression in a computer language (Fortran, C/C++). They use values (e.g. for the masses) or expressions (e.g. for the couplings) produced by step I, or model-specific libraries constructed by hand (usually relying heavily on computer algebra languages). When this expression is integrated (usually numerically) over the internal degrees of freedom, it provides the total and differential cross-sections for a given set of initial parameters, like the initial state particle energies and polarizations (a minimal numerical sketch of this integration step follows this list).
III Event generator code generation: This code must then be interfaced to other packages to fully provide the actual final state. The various effects or phenomena that need to be implemented are discussed below.
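As a minimal sketch of the integration in step II, the following Python fragment integrates a toy squared matrix element over a single angular degree of freedom by plain Monte Carlo. The function names and the (1 + cos²θ) shape (reminiscent of tree-level e+e− → μ+μ−) are purely illustrative, not the output of any real package:

```python
import math
import random

def toy_matrix_element_sq(cos_theta):
    """Illustrative placeholder for |M|^2 of a 2->2 process: the angular
    shape ~ (1 + cos^2 theta) familiar from e+e- -> mu+mu- at tree level.
    A real code would build this from the couplings and masses of step I."""
    norm = 1.0  # assumed normalization, arbitrary units
    return norm * (1.0 + cos_theta**2)

def integrate_cross_section(n_points=100_000, seed=42):
    """Monte Carlo integration over the internal degrees of freedom
    (here just cos(theta), with a trivial azimuthal angle) giving a
    total 'cross-section' in arbitrary units."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_points):
        cos_theta = rng.uniform(-1.0, 1.0)   # sample the polar angle
        total += toy_matrix_element_sq(cos_theta)
    volume = 2.0 * 2.0 * math.pi             # d(cos theta) x d(phi) volume
    return volume * total / n_points

if __name__ == "__main__":
    print(f"toy total cross-section (arb. units): {integrate_cross_section():.4f}")
```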
The interplay, or matching, of the precise matrix element calculation and the approximations resulting from the simulation of the parton shower gives rise to further complications, either within a given level of precision, such as at leading order (LO) for the production of n jets, or between two levels of precision when attempting to connect matrix elements computed at next-to-leading order (NLO, 1-loop) or next-to-next-to-leading order (NNLO, 2-loop) with LO parton shower packages.
Several methods have been developed for this matching, including subtraction methods.
The only fully consistent approach, however, is to match packages at the same level of theoretical accuracy, such as an NLO matrix element calculation with an NLO parton shower package. This is currently in development.
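Schematically, the subtraction idea can be written in generic textbook notation, with B, V and R the Born, virtual and real contributions over the n- and (n+1)-particle phase spaces, and S a subtraction term that renders each bracket separately finite (this is the general pattern, not the formula of any specific package):

```latex
% Schematic NLO subtraction: S makes each bracket separately finite
\sigma^{\mathrm{NLO}}
  = \int d\Phi_{n} \Big[ B + V + \int d\Phi_{1}\, S \Big]
  + \int d\Phi_{n+1} \big[ R - S \big]
```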
The idea of automating the calculations in high-energy physics is not new. It dates back to the 1960s, when packages such as SCHOONSCHIP and then REDUCE were developed.
These are symbolic manipulation codes that automate the algebraic parts of a matrix element evaluation, like traces over Dirac matrices and contractions of Lorentz indices. Such codes have evolved considerably, with applications not only optimized for high-energy physics, like FORM, but also more general-purpose programs like Mathematica and Maple.
Generation of QED Feynman graphs at any order in the coupling constant was automated in the late 1970s[15]. One of the first major applications of these early developments was the calculation of the anomalous magnetic moments of the electron and the muon[16]. The first automatic system incorporating all the steps of a cross-section calculation (Feynman graph generation, amplitude generation through a REDUCE source code that produces FORTRAN code, phase space integration, and event generation with BASES/SPRING[17]) was GRAND[18]. It was limited to tree-level processes in QED. In the early 1990s, a few groups started to develop packages aiming at automation in the Standard Model.[19][1][2][3][4][5][6][7][8][9][10]
Feynman amplitudes are written in terms of spinor products of wave functions for massless fermions and are then evaluated numerically before the amplitudes are squared. Taking fermion masses into account implies that the Feynman amplitudes are decomposed into vertex amplitudes by splitting the internal lines into wave functions of fermions and polarization vectors of gauge bosons.
All helicity configurations can be computed independently.
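A minimal numerical sketch of one spinor-product building block, assuming a common light-cone convention for the massless "angle" product (overall phase conventions differ between packages; the helper names are ours):

```python
import math

def angle_product(pi, pj):
    """Massless spinor 'angle' product <i j> in one common light-cone
    convention. Momenta are (E, px, py, pz) tuples with p+ = E + pz > 0
    assumed for both arguments."""
    def plus(p):   # light-cone component p+ = E + pz
        return p[0] + p[3]
    def perp(p):   # complex transverse component px + i*py
        return complex(p[1], p[2])
    return (math.sqrt(plus(pj) / plus(pi)) * perp(pi)
            - math.sqrt(plus(pi) / plus(pj)) * perp(pj))

def minkowski_dot(p, q):
    """Minkowski product p.q with the (+,-,-,-) metric."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# Consistency check for massless momenta: |<ij>|^2 == 2 p_i . p_j
p1 = (2.0, 0.0, 0.0, 2.0)
p2 = (1.0, 1.0, 0.0, 0.0)
print(abs(angle_product(p1, p2))**2, 2.0 * minkowski_dot(p1, p2))
```

The printed check verifies the defining property |⟨ij⟩|² = 2 pᵢ·pⱼ of the angle product.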
The method is similar to the previous one, but the numerical calculation is performed after squaring the Feynman amplitude. The final expression is shorter and therefore faster to compute, but the independent helicity information is no longer available.
The scattering amplitude is evaluated recursively through a set of Dyson-Schwinger equations. The computational cost of this algorithm grows asymptotically as 3^n, where n is the number of particles involved in the process, compared to n! in the traditional Feynman-graph approach. The unitary gauge is used, and mass effects are available as well. Additionally, the color and helicity structures are appropriately transformed so that the usual summation is replaced by Monte Carlo techniques.[11]
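A quick way to see why the recursive approach wins is to compare the two growth laws quoted above (illustrative only):

```python
import math

# Compare the asymptotic cost of the Dyson-Schwinger recursion (3^n)
# with the factorial growth of summing individual Feynman graphs (n!).
for n in range(4, 13, 2):
    print(f"n={n:2d}   3^n = {3**n:>8,}   n! = {math.factorial(n):>13,}")
```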
The integration of the "matrix element" over the multidimensional internal phase space provides the total and differential cross-sections. Each point of this phase space is associated with an event probability. This is used to randomly generate events closely mimicking experimental data. This is called event generation, the first step in the complete chain of event simulation. The initial and final state particles can be elementary particles like electrons, muons, or photons, but also partons (the quark and gluon constituents of protons and neutrons).
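The random generation step is commonly implemented by acceptance-rejection ("hit-or-miss") sampling; the sketch below uses a purely illustrative one-dimensional toy density and is not the algorithm of any specific generator:

```python
import random

def toy_density(cos_theta):
    """Illustrative event weight ~ d(sigma)/d(cos theta), not a real process."""
    return 1.0 + cos_theta**2

def generate_events(n_events, seed=7):
    """Acceptance-rejection sampling: propose phase-space points uniformly
    and accept with probability weight / max_weight. Accepted points are
    unweighted events."""
    rng = random.Random(seed)
    w_max = 2.0                      # maximum of 1 + cos^2 on [-1, 1]
    events = []
    while len(events) < n_events:
        c = rng.uniform(-1.0, 1.0)   # proposed phase-space point
        if rng.random() * w_max <= toy_density(c):
            events.append(c)         # 'hit': keep as an unweighted event
    return events

print(generate_events(5))
```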
More effects must then be implemented to reproduce real-life events like those detected at the colliders.
The initial electron or positron may undergo radiation before it actually interacts: initial state radiation and beamstrahlung.
The bare partons, which do not exist in nature (they are confined inside the hadrons), must, so to speak, be dressed so that they form the known hadrons or mesons. This is done in two steps: parton shower and hadronization.
When the initial state particles are protons at high energy, only their constituents interact. Therefore, the specific parton that will experience the "hard interaction" has to be selected, and structure functions must be implemented. The other partons may interact "softly" and must also be simulated, as they contribute to the complexity of the event: the underlying event.
The fragmentation function (FF) is a probability distribution function. It is used to find the density function of fragmented mesons in hadron–hadron collisions.
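In standard textbook notation, the fragmentation function D_q^h(z, Q²) is the probability density for a parton q to produce a hadron h carrying a momentum fraction z; the densities are constrained by the momentum sum rule:

```latex
% Momentum sum rule: the fragmenting parton's momentum is shared among all hadrons
\sum_{h} \int_{0}^{1} dz \; z \, D_{q}^{h}(z, Q^{2}) = 1
```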
The structure function, like the fragmentation function, is also a probability density function. It is analogous to the structure factor in solid-state physics.
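Analogously, in standard notation the parton distribution (structure) functions f_i(x, Q²) satisfy a momentum sum rule expressing that the partons together carry the full hadron momentum:

```latex
% Momentum sum rule for the parton distribution functions of a hadron
\sum_{i} \int_{0}^{1} dx \; x \, f_{i}(x, Q^{2}) = 1
```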
Automatic software packages can be useful in exploring a number of Beyond the Standard Model (BSM) theories, such as the Minimal Supersymmetric Standard Model (MSSM), to predict and understand possible particle interactions in future physics experiments.
Several computational issues need to be considered for automatic calculations. For example, special functions often need to be evaluated in these software packages, both algebraically and numerically. For algebraic calculations, symbolic packages such as Maple and Mathematica often need to handle the abstract mathematical structures appearing in subatomic particle collisions and emissions.
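As a concrete numerical illustration, the dilogarithm Li₂, a special function that appears in many one-loop results, can be evaluated with SciPy; the wrapper name dilog is ours, and the mapping uses SciPy's documented real-argument relation spence(x) = Li₂(1 − x):

```python
import numpy as np
from scipy.special import spence

def dilog(x):
    """Dilogarithm Li_2(x) for real arguments, using SciPy's convention
    spence(x) = Li_2(1 - x), hence Li_2(x) = spence(1 - x)."""
    return spence(1.0 - np.asarray(x, dtype=float))

# Known value for comparison: Li_2(1) = pi^2 / 6
print(dilog(1.0), np.pi**2 / 6)
```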
Name | Model | Max FS | Tested FS | Short description | Publication | Method | Output | Status
---|---|---|---|---|---|---|---|---
MadGraph5 | Any Model | 1/2->n | 2->8 | complete, massive, helicity, color, decay chain | what is MG5 | HA (automatic generation) | | PD
Grace | SM/MSSM | 2->n | 2->6 | complete, massive, helicity, color | Manual v2.0 | HA | | PD
CompHEP | | | | | | | |
CalcHEP | | | | | | | |
Sherpa | SM/MSSM | 2->n | 2->8 | massive | publication | HA/DS | | PD
GenEva | | | | | | | |
HELAC | | | | | | | |
Status: PD: Public Domain
Model: SM: Standard Model, MSSM: Minimal Supersymmetric Standard Model
Method: HA: Helicity Amplitude, DS: Dyson Schwinger
Output: ME: Matrix Element, CS: Cross-Sections, PEG: Parton level Event Generation, FEG: Full particle level Event Generation
Name | Model | Order tested | Max FS | Tested FS | Short description | Publication | Method | Status
---|---|---|---|---|---|---|---|---
Grace L-1 | SM/MSSM | 1-loop | 2->n | 2->4 | complete, massive, helicity, color | NA | | NA