Self-consistency principle in high energy physics

The self-consistency principle was established by Rolf Hagedorn in 1965 to explain the thermodynamics of fireballs in high energy physics collisions. A thermodynamical approach to high energy collisions was first proposed by E. Fermi. [1]

Partition function

The partition function of the fireballs can be written in two forms, one in terms of its density of states, σ(E), and the other in terms of its mass spectrum, ρ(m).
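
A minimal sketch of a standard way of writing these two forms, assuming a fireball of volume V₀ treated as an ideal gas of lighter fireballs in the Boltzmann approximation (the normalization is an assumption of this sketch rather than Hagedorn's exact expression), is

Z(\beta) = \int_0^\infty \sigma(E)\, e^{-\beta E}\, dE

and

\log Z(\beta) = \frac{V_0}{2\pi^2} \int_0^\infty dm\, \rho(m) \int_0^\infty dp\, p^2\, e^{-\beta \sqrt{p^2 + m^2}} .

The first form describes the fireball through its total energy; the second treats the same fireball as a gas of lighter fireballs with mass spectrum ρ(m), which is the origin of the bootstrap idea.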

The self-consistency principle says that both forms must be asymptotically equivalent for energies or masses sufficiently high (asymptotic limit). Also, the density of states and the mass spectrum must be asymptotically equivalent in the sense of the weak constraint proposed by Hagedorn [2] as

\lim_{E \to \infty} \frac{\log \rho(E)}{\log \sigma(E)} = 1 .

These two conditions are known as the self-consistency principle or bootstrap idea. After a long mathematical analysis Hagedorn was able to prove that there are in fact a density of states σ(E) and a mass spectrum ρ(m) satisfying the above conditions, resulting in

\rho(m) \simeq \mathrm{const}\; m^{-5/2}\, e^{\beta_0 m} \qquad (m \to \infty)

and

\sigma(E) \simeq \mathrm{const}\; E^{-3}\, e^{\beta_0 E} \qquad (E \to \infty),

with β₀ and the limiting temperature T₀ related by

\beta_0 = \frac{1}{k_B T_0} .

Then the asymptotic partition function is given by

Z(\beta) \simeq a \left( \beta - \beta_0 \right)^{-b},

where a and b are constants and a singularity is clearly observed as β → β₀. This singularity determines the limiting temperature T₀ in Hagedorn's theory, which is also known as the Hagedorn temperature.
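
A quick way to see where this singularity comes from, assuming the ideal-fireball-gas form of log Z sketched above, is to use the standard relativistic momentum integral

\int_0^\infty dp\, p^2\, e^{-\beta \sqrt{p^2 + m^2}} = \frac{m^2}{\beta}\, K_2(\beta m) \simeq \sqrt{\frac{\pi}{2}}\, \frac{m^{3/2}}{\beta^{3/2}}\, e^{-\beta m} \qquad (\beta m \gg 1),

so that inserting ρ(m) ∝ m^{-5/2} e^{β₀ m} gives

\log Z(\beta) \sim \int^{\infty} \frac{dm}{m}\, e^{-(\beta - \beta_0)\, m} \sim -\log(\beta - \beta_0),

which diverges as β → β₀ from above; for β < β₀ the integral does not converge at all, which is why T₀ = 1/(k_B β₀) acts as a limiting temperature.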

Hagedorn not only gave a simple explanation for the thermodynamical aspects of high energy particle production, but also worked out a formula for the hadronic mass spectrum and predicted a limiting temperature for hot hadronic systems.

Some time later, N. Cabibbo and G. Parisi showed that this limiting temperature is related to a phase transition, [3] characterized by the deconfinement of quarks at high energies. The mass spectrum was further analyzed by Steven Frautschi. [4]

Q-exponential function

The Hagedorn theory was able to describe correctly the experimental data from collisions with center-of-mass energies up to approximately 10 GeV, but above this region it failed. In 2000, I. Bediaga, E. M. F. Curado and J. M. de Miranda [5] proposed a phenomenological generalization of Hagedorn's theory, replacing the exponential function that appears in the partition function by the q-exponential function of Tsallis non-extensive statistics. With this modification the generalized theory was again able to describe the experimental data over the extended energy range.
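
For reference, the Tsallis q-exponential that replaces the ordinary exponential is commonly written as (sign conventions for q − 1 vary between authors)

\exp_q(x) = \left[ 1 + (1 - q)\, x \right]^{\frac{1}{1 - q}}, \qquad \lim_{q \to 1} \exp_q(x) = e^{x},

so that the Boltzmann factor e^{-βE} is replaced by [1 + (q − 1) βE]^{-1/(q−1)}, which for q > 1 decays as a power law at large E and accounts for the high-energy tails that the purely exponential Hagedorn form misses.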

In 2012, A. Deppman proposed a non-extensive self-consistent thermodynamical theory [6] that combines the self-consistency principle with non-extensive statistics. This theory reproduces the formula proposed by Bediaga et al., which correctly describes the high energy data, and in addition yields new formulas for the mass spectrum and the density of states of the fireball. It also predicts a new limiting temperature and a limiting entropic index.
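
Schematically, and assuming the same ideal-fireball-gas form used in the sketch above (this illustrates the replacement rather than reproducing the exact expressions of [6]), the non-extensive partition function becomes

\log Z_q(\beta) = \frac{V_0}{2\pi^2} \int_0^\infty dm\, \rho(m) \int_0^\infty dp\, p^2 \left[ 1 + (q - 1)\, \beta \sqrt{p^2 + m^2} \right]^{-\frac{1}{q - 1}},

and the two self-consistency conditions are then imposed on this generalized form, which is what produces the q-dependent mass spectrum, density of states and limiting temperature mentioned above.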

References

  1. Fermi, E. (1950). "High Energy Nuclear Events". Progress of Theoretical Physics. 5 (4): 570–583. doi:10.1143/ptp/5.4.570. ISSN 0033-068X.
  2. Hagedorn, R. (1965). "Statistical thermodynamics of strong interactions at high energies". Supplemento al Nuovo Cimento. 3: 147–186.
  3. Cabibbo, N.; Parisi, G. (1975). "Exponential hadronic spectrum and quark liberation". Physics Letters B. 59 (1): 67–69. doi:10.1016/0370-2693(75)90158-6. ISSN 0370-2693.
  4. Frautschi, Steven (1971). "Statistical Bootstrap Model of Hadrons". Physical Review D. 3 (11): 2821–2834. doi:10.1103/physrevd.3.2821. ISSN 0556-2821.
  5. Bediaga, I.; Curado, E. M. F.; de Miranda, J. M. (2000). "A nonextensive thermodynamical equilibrium approach in e+e− → hadrons". Physica A: Statistical Mechanics and Its Applications. 286 (1–2): 156–163. arXiv:hep-ph/9905255. doi:10.1016/s0378-4371(00)00368-x. ISSN 0378-4371.
  6. Deppman, A. (2012). "Self-consistency in non-extensive thermodynamics of highly excited hadronic states". Physica A: Statistical Mechanics and Its Applications. 391 (24): 6380–6385. arXiv:1205.0455. doi:10.1016/j.physa.2012.07.071. ISSN 0378-4371.