In statistical mechanics, the hard hexagon model is a 2-dimensional lattice model of a gas, where particles are allowed to be on the vertices of a triangular lattice but no two particles may be adjacent.
The model was solved by Baxter (1980), who found that it was related to the Rogers–Ramanujan identities.
The hard hexagon model occurs within the framework of the grand canonical ensemble, where the total number of particles (the "hexagons") is allowed to vary naturally, and is fixed by a chemical potential. In the hard hexagon model, all valid states have zero energy, and so the only important thermodynamic control variable is the ratio of chemical potential to temperature μ/(kT). The exponential of this ratio, z = exp(μ/(kT)), is called the activity, and larger values correspond roughly to denser configurations.
For a triangular lattice with N sites, the grand partition function is

Z(z) = Σ_n z^n g(n, N),

where g(n, N) is the number of ways of placing n particles on distinct lattice sites such that no 2 are adjacent. The function κ is defined by

κ(z) = lim N→∞ Z(z)^(1/N),

so that log(κ) is the free energy per unit site. Solving the hard hexagon model means (roughly) finding an exact expression for κ as a function of z.
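As a concrete check of these definitions, the following minimal Python sketch enumerates the independent sets of a small periodic triangular-lattice patch by brute force and evaluates the grand partition function, a finite-size estimate of κ, and the mean density. The patch size L = 4 and the activity values are arbitrary illustrative choices, and a finite patch only roughly approximates the thermodynamic limit; this is not Baxter's exact solution.

```python
# Brute-force evaluation of the hard hexagon grand partition function on a
# small L x L triangular-lattice patch with periodic boundary conditions.
# Finite-size illustration only; not Baxter's exact solution.
from itertools import combinations

def hard_hexagon_patch(z, L=4):
    sites = [(i, j) for i in range(L) for j in range(L)]
    # Each site of a triangular lattice (oblique coordinates) has 6 neighbours.
    adj = {(i, j): {((i + di) % L, (j + dj) % L)
                    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]}
           for (i, j) in sites}
    N = len(sites)
    Z, weighted_n = 0.0, 0.0
    for n in range(N + 1):
        for subset in combinations(sites, n):
            occ = set(subset)
            if all(not (adj[s] & occ) for s in occ):   # no two particles adjacent
                Z += z ** n                            # g(n, N) accumulated term by term
                weighted_n += n * z ** n
    kappa = Z ** (1.0 / N)          # finite-size estimate of kappa
    rho = weighted_n / (Z * N)      # mean density <n>/N
    return Z, kappa, rho

if __name__ == "__main__":
    for z in (0.5, 1.0, 2.0):
        Z, kappa, rho = hard_hexagon_patch(z)
        print(f"z = {z}: Z = {Z:.3f}, kappa ~ {kappa:.4f}, rho ~ {rho:.4f}")
```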
The mean density ρ is given for small z by
The vertices of the lattice fall into 3 classes numbered 1, 2, and 3, given by the 3 different ways to fill space with hard hexagons. There are 3 local densities ρ1, ρ2, ρ3, corresponding to the 3 classes of sites. When the activity is large the system approximates one of these 3 packings, so the local densities differ, but when the activity is below a critical point the three local densities are the same. The critical point separating the low-activity homogeneous phase from the high-activity ordered phase is zc = φ^5 = (11 + 5√5)/2 ≈ 11.09, with golden ratio φ. Above the critical point the local densities differ; in the phase where most hexagons are on sites of type 1, they can be expanded as
The solution is given for small values of z < zc by
where
For large z > zc the solution (in the phase where most occupied sites have type 1) is given by
The functions G and H turn up in the Rogers–Ramanujan identities, and the function Q is the Euler function, which is closely related to the Dedekind eta function. If x = e^(2πiτ), then x^(−1/60)G(x), x^(11/60)H(x), x^(−1/24)P(x), z, κ, ρ, ρ1, ρ2, and ρ3 are modular functions of τ, while x^(1/24)Q(x) is a modular form of weight 1/2. Since any two modular functions are related by an algebraic relation, this implies that the functions κ, z, R, ρ are all algebraic functions of each other (of quite high degree) (Joyce 1988). In particular, the value of κ(1), which Eric Weisstein dubbed the hard hexagon entropy constant (Weisstein), is an algebraic number of degree 24 equal to 1.395485972... (OEIS: A085851).
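For concreteness, the Python sketch below uses the standard product and sum forms of the Rogers–Ramanujan functions G and H and of the Euler function Q (these formulas are assumed here; they are not reproduced in the text above) and checks the Rogers–Ramanujan identities numerically at a sample value of x.

```python
# Numerical sanity check of the Rogers-Ramanujan identities underlying Baxter's
# solution, using the standard product and sum definitions (assumed here).
def G(x, terms=200):
    # G(x) = prod over n >= 1 of 1 / ((1 - x^(5n-4)) (1 - x^(5n-1)))
    p = 1.0
    for n in range(1, terms):
        p /= (1 - x ** (5 * n - 4)) * (1 - x ** (5 * n - 1))
    return p

def H(x, terms=200):
    # H(x) = prod over n >= 1 of 1 / ((1 - x^(5n-3)) (1 - x^(5n-2)))
    p = 1.0
    for n in range(1, terms):
        p /= (1 - x ** (5 * n - 3)) * (1 - x ** (5 * n - 2))
    return p

def Q(x, terms=200):
    # Euler function Q(x) = prod over n >= 1 of (1 - x^n)
    p = 1.0
    for n in range(1, terms):
        p *= 1 - x ** n
    return p

def rr_sum(x, shift=0, terms=60):
    # Sum side of the Rogers-Ramanujan identities:
    #   sum over n >= 0 of x^(n^2 + shift*n) / ((1-x)(1-x^2)...(1-x^n)),
    # with shift = 0 for G and shift = 1 for H.
    total, denom = 0.0, 1.0
    for n in range(terms):
        if n > 0:
            denom *= 1 - x ** n
        total += x ** (n * n + shift * n) / denom
    return total

if __name__ == "__main__":
    x = 0.3
    print(G(x), rr_sum(x, 0))   # the two values should agree to high precision
    print(H(x), rr_sum(x, 1))
    print(Q(x))
```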
The hard hexagon model can be defined similarly on the square and honeycomb lattices. No exact solution is known for either of these models, but the critical point zc is near 3.7962±0.0001 for the square lattice and 7.92±0.08 for the honeycomb lattice; κ(1) is approximately 1.503048082... (OEIS: A085850) for the square lattice and 1.546440708... for the honeycomb lattice (Baxter 1999).
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function f(z) has a root at w, then f(z)/(z − w), taking the limit value at w, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
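As a minimal illustration (the two-level system and its energy gap are chosen here for the example, not taken from the text above), the sketch below evaluates a canonical partition function and extracts the mean energy from a derivative of log Z.

```python
# Minimal illustration: canonical partition function of a hypothetical
# two-level system with energy gap eps, and the mean energy obtained from
# the derivative <E> = -d ln Z / d beta.
import math

def partition_function(beta, eps=1.0):
    return 1.0 + math.exp(-beta * eps)

def mean_energy(beta, eps=1.0, h=1e-6):
    # Numerical derivative of ln Z with respect to beta.
    return -(math.log(partition_function(beta + h, eps)) -
             math.log(partition_function(beta - h, eps))) / (2 * h)

if __name__ == "__main__":
    for beta in (0.1, 1.0, 10.0):   # beta = 1/(kT)
        print(f"beta = {beta}: Z = {partition_function(beta):.4f}, <E> = {mean_energy(beta):.4f}")
```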
In statistics, the Fisher transformation of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient r is near 1 or -1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ. The Fisher transformation solves this problem by yielding a variable that is approximately normally distributed, with a variance that is stable over different values of r.
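A sketch of the resulting recipe, assuming the standard large-sample approximation in which the transformed value has standard error 1/√(n − 3) (a formula not stated in the paragraph above): transform, build a symmetric interval, then transform back.

```python
# Sketch of a confidence interval for a population correlation via the Fisher
# transformation. The standard error 1/sqrt(n - 3) is the usual large-sample
# approximation, assumed here rather than quoted from the text.
import math

def fisher_ci(r, n, z_crit=1.96):
    z = math.atanh(r)                     # Fisher transformation of r
    se = 1.0 / math.sqrt(n - 3)           # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

if __name__ == "__main__":
    print(fisher_ci(r=0.95, n=30))   # note the interval is asymmetric around 0.95
```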
In group theory, restriction forms a representation of a subgroup using a known representation of the whole group. Restriction is a fundamental construction in representation theory of groups. Often the restricted representation is simpler to understand. Rules for decomposing the restriction of an irreducible representation into irreducible representations of the subgroup are called branching rules, and have important applications in physics. For example, in the case of explicit symmetry breaking, the symmetry group of the problem is reduced from the whole group to one of its subgroups. In quantum mechanics, this reduction in symmetry appears as a splitting of degenerate energy levels into multiplets, as in the Stark or Zeeman effect.
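As a small worked example, chosen here for illustration rather than drawn from the text, the sketch below restricts the two-dimensional irreducible representation of the symmetric group S3 to its cyclic subgroup C3 and decomposes the restriction using character inner products over C3.

```python
# Illustrative branching-rule computation: restrict the 2-dimensional irrep of
# S3 to the cyclic subgroup C3 and decompose it with character inner products.
import cmath

w = cmath.exp(2j * cmath.pi / 3)            # primitive cube root of unity
elements = [0, 1, 2]                        # C3 = {e, c, c^2}

# Character of the 2-dim irrep of S3 restricted to C3: chi(e) = 2, chi(c) = chi(c^2) = -1.
restricted = {0: 2, 1: -1, 2: -1}

# The three irreducible characters of C3.
irreps = {
    "trivial": {k: 1 for k in elements},
    "omega":   {k: w ** k for k in elements},
    "omega^2": {k: w ** (2 * k) for k in elements},
}

for name, chi in irreps.items():
    # Multiplicity = (1/|C3|) * sum over g of restricted(g) * conjugate(chi(g)).
    mult = sum(restricted[g] * chi[g].conjugate() for g in elements) / 3
    print(name, round(mult.real))   # expect 0, 1, 1: the restriction splits as omega + omega^2
```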
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive.
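For concreteness, the sketch below uses the standard definition of the von Mangoldt function, Λ(n) = log p when n is a power of a prime p and 0 otherwise (this definition is assumed here, as the paragraph does not spell it out), and exhibits a failure of multiplicativity.

```python
# The von Mangoldt function, using its standard definition (assumed here):
# Lambda(n) = log p if n = p^k for a prime p and k >= 1, and 0 otherwise.
import math

def von_mangoldt(n):
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:                  # p is the smallest prime factor of n
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0   # nonzero only if n is a power of p
    return 0.0

if __name__ == "__main__":
    print([round(von_mangoldt(n), 3) for n in range(1, 13)])
    # Not multiplicative: Lambda(12) != Lambda(3) * Lambda(4) even though gcd(3, 4) = 1.
    print(von_mangoldt(12), von_mangoldt(3) * von_mangoldt(4))
```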
In mathematics, the Eisenstein integers, occasionally also known as Eulerian integers, are the complex numbers of the form z = a + bω, where a and b are integers and ω = (−1 + i√3)/2 = e^(2πi/3) is a primitive (non-real) cube root of unity.
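A minimal sketch of Eisenstein-integer arithmetic, using the relation ω² = −1 − ω and the norm N(a + bω) = a² − ab + b² (standard facts, assumed here for the example):

```python
# Small sketch of Eisenstein-integer arithmetic a + b*omega, where omega is a
# primitive cube root of unity, so omega^2 = -1 - omega.
from dataclasses import dataclass

@dataclass(frozen=True)
class Eisenstein:
    a: int   # coefficient of 1
    b: int   # coefficient of omega

    def __add__(self, other):
        return Eisenstein(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2, with w^2 = -1 - w
        a, b, c, d = self.a, self.b, other.a, other.b
        return Eisenstein(a * c - b * d, a * d + b * c - b * d)

    def norm(self):
        # N(a + b w) = a^2 - a b + b^2, which is multiplicative over products
        return self.a ** 2 - self.a * self.b + self.b ** 2

if __name__ == "__main__":
    x, y = Eisenstein(2, 1), Eisenstein(1, -3)
    print((x * y).norm(), x.norm() * y.norm())   # the two values agree: 39 and 39
```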
In mathematics, and particularly in the field of complex analysis, the Hadamard factorization theorem asserts that every entire function with finite order can be represented as a product involving its zeroes and an exponential of a polynomial. It is named for Jacques Hadamard.
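As a standard illustration (the example is supplied here, not drawn from the paragraph above): sin(πz) is entire of order 1 with zeros at the nonzero integers, and its Hadamard factorization reduces to the classical product

sin(πz) = πz ∏_{n≥1} (1 − z²/n²),

in which the exponential of a polynomial reduces to the constant factor π.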
In statistical mechanics, the radial distribution function, in a system of particles, describes how density varies as a function of distance from a reference particle.
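A minimal estimator sketch (the box size, particle count and binning below are arbitrary choices for illustration): histogram the pair distances in a periodic cubic box and normalize by the ideal-gas expectation, so that an uncorrelated system gives g(r) ≈ 1.

```python
# Minimal radial distribution function estimator: histogram pair distances in a
# periodic cubic box and normalize by the ideal-gas expectation.
import numpy as np

def radial_distribution(positions, box, nbins=50):
    n = len(positions)
    rho = n / box ** 3                       # mean number density
    edges = np.linspace(0, box / 2, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n):
        # Minimum-image pair distances from particle i to particles j > i.
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    # Each pair counted once; the ideal-gas count per bin is 0.5 * n * rho * shell volume.
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = hist / (0.5 * n * rho * shell)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 10.0, size=(500, 3))   # ideal-gas-like random positions
    r, g = radial_distribution(pos, box=10.0)
    print(np.round(g[5:10], 2))   # should hover around 1 for uncorrelated particles
```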
In conformal field theory and representation theory, a W-algebra is an associative algebra that generalizes the Virasoro algebra. W-algebras were introduced by Alexander Zamolodchikov, and the name "W-algebra" comes from the fact that Zamolodchikov used the letter W for one of the elements of one of his examples.
The lattice Boltzmann methods (LBM), originating from the lattice gas automata (LGA) method (Hardy–Pomeau–Pazzis and Frisch–Hasslacher–Pomeau models), are a class of computational fluid dynamics (CFD) methods for fluid simulation. Instead of solving the Navier–Stokes equations directly, a fluid density on a lattice is simulated with streaming and collision (relaxation) processes. The method is versatile as the model fluid can straightforwardly be made to mimic common fluid behaviour like vapour/liquid coexistence, and so fluid systems such as liquid droplets can be simulated. Also, fluids in complex environments such as porous media can be straightforwardly simulated, whereas with complex boundaries other CFD methods can be hard to work with.
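The sketch below is a minimal single-relaxation-time (BGK) D2Q9 example of this streaming-and-collision cycle on a periodic grid; the grid size, relaxation time and initial shear-wave velocity field are arbitrary illustrative choices, and no complex boundaries are included.

```python
# Minimal BGK (single-relaxation-time) D2Q9 lattice Boltzmann sketch on a
# periodic grid: alternate collision and streaming, recovering density and
# velocity as moments of the distributions.
import numpy as np

NX, NY, TAU, STEPS = 64, 64, 0.8, 200

# D2Q9 lattice velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    cu = np.einsum('qd,dxy->qxy', c, np.stack([ux, uy]))
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

# Initial state: uniform density, small sinusoidal shear flow ux(y).
rho = np.ones((NX, NY))
ux = 0.05 * np.sin(2 * np.pi * np.arange(NY) / NY)[None, :] * np.ones((NX, NY))
uy = np.zeros((NX, NY))
f = equilibrium(rho, ux, uy)

for step in range(STEPS):
    # Moments: density and velocity.
    rho = f.sum(axis=0)
    ux = np.einsum('q,qxy->xy', c[:, 0], f) / rho
    uy = np.einsum('q,qxy->xy', c[:, 1], f) / rho
    # Collision: relax towards the local equilibrium.
    f += (equilibrium(rho, ux, uy) - f) / TAU
    # Streaming: shift each population along its lattice velocity (periodic).
    for q in range(9):
        f[q] = np.roll(f[q], shift=(c[q, 0], c[q, 1]), axis=(0, 1))

print("total mass conserved:", np.isclose(f.sum(), NX * NY))
print("peak velocity decayed from 0.05 to", float(np.abs(ux).max()))
```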
In mathematics, a mock modular form is the holomorphic part of a harmonic weak Maass form, and a mock theta function is essentially a mock modular form of weight 1/2. The first examples of mock theta functions were described by Srinivasa Ramanujan in his last 1920 letter to G. H. Hardy and in his lost notebook. Sander Zwegers discovered that adding certain non-holomorphic functions to them turns them into harmonic weak Maass forms.
The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of 4th order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam, and the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter, and thus the distance between opposing shear forces decreases.
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even order by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
Discrete Morse theory is a combinatorial adaptation of Morse theory developed by Robin Forman. The theory has various practical applications in diverse fields of applied mathematics and computer science, such as configuration spaces, homology computation, denoising, mesh compression, and topological data analysis.
Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production cost and lets companies build smaller computers and other devices. Miniaturization, however, has increased dissipated power per unit area and made it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for wires with relatively small cross-sections, where it may affect normal semiconductor behavior. Moreover, since the generation of heat is proportional to the frequency of operation for switching circuits, fast computers generate more heat than slow ones, an undesired effect for chip manufacturers. This article summarizes physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view.
In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear n-ary function over a finite ring from given samples, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.
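A toy sketch of how LWE samples hide a secret behind noisy linear equations; the parameters below are chosen only for illustration and are far too small to be secure.

```python
# Toy generation of LWE samples: the secret s is hidden behind the noisy
# linear equations b = A @ s + e (mod q). Parameters are illustrative only.
import numpy as np

def lwe_samples(n=8, m=20, q=97, noise_bound=2, rng=None):
    rng = rng or np.random.default_rng(0)
    s = rng.integers(0, q, size=n)                           # secret vector
    A = rng.integers(0, q, size=(m, n))                      # public random matrix
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)  # small error terms
    b = (A @ s + e) % q                                      # noisy inner products
    return A, b, s

if __name__ == "__main__":
    A, b, s = lwe_samples()
    # Without the errors e, s could be recovered by Gaussian elimination mod q;
    # with the noise, the pairs (A, b) are conjectured to hide s.
    print(A.shape, b[:5], "secret:", s)
```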
In mathematics, the connective constant is a numerical quantity associated with self-avoiding walks on a lattice. It is studied in connection with the notion of universality in two-dimensional statistical physics models. While the connective constant depends on the choice of lattice so itself is not universal, it is nonetheless an important quantity that appears in conjectures for universal laws. Furthermore, the mathematical techniques used to understand the connective constant, for example in the recent rigorous proof by Duminil-Copin and Smirnov that the connective constant of the hexagonal lattice has the precise value √(2 + √2), may provide clues to a possible approach for attacking other important open problems in the study of self-avoiding walks, notably the conjecture that self-avoiding walks converge in the scaling limit to the Schramm–Loewner evolution.
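As an illustration of the quantity itself, the sketch below counts self-avoiding walks c_n on the square lattice by brute force and prints the crude estimate c_n^(1/n). The square-lattice connective constant is known only numerically (about 2.64); the exact value √(2 + √2) proved by Duminil-Copin and Smirnov refers to the hexagonal lattice, not the one used here.

```python
# Brute-force count of self-avoiding walks c_n on the square lattice, with the
# crude finite-n estimate c_n**(1/n) of the connective constant.
def count_saw(n, pos=(0, 0), visited=None):
    visited = visited or {(0, 0)}
    if n == 0:
        return 1
    total = 0
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:                    # self-avoidance constraint
            total += count_saw(n - 1, nxt, visited | {nxt})
    return total

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 10):
        c_n = count_saw(n)
        print(f"c_{n} = {c_n}, c_n^(1/n) = {c_n ** (1 / n):.3f}")
```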
Quantum stochastic calculus is a generalization of stochastic calculus to noncommuting variables. The tools provided by quantum stochastic calculus are of great use for modeling the random evolution of systems undergoing measurement, as in quantum trajectories. Just as the Lindblad master equation provides a quantum generalization to the Fokker–Planck equation, quantum stochastic calculus allows for the derivation of quantum stochastic differential equations (QSDE) that are analogous to classical Langevin equations.
Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 with a seminal work by Miklós Ajtai, who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem SVP_γ (where γ = n^c for some constant c) is hard in a worst-case scenario.
The Kaniadakis Erlang distribution is a family of continuous statistical distributions, which is a particular case of the κ-Gamma distribution when its shape parameter is a positive integer. The first member of this family is the κ-exponential distribution of Type I. The κ-Erlang is a κ-deformed version of the Erlang distribution. It is one example of a Kaniadakis distribution.